
Redis Object Cache Pro By ObjectCache

Redis Object Cache Pro By ObjectCache v.1.20.2 Nulled

And don't forget to check if you have the zstd library itself installed :)

Code:
dpkg-query --list "*zstd*" | grep "ii"

libzstd-dev:amd64 1.5.4+dfsg2-5 amd64 fast lossless compression algorithm -- development files
libzstd1:amd64 1.5.4+dfsg2-5 amd64 fast lossless compression algorithm
php-zstd 0.13.1-1+0~20231125.8+debian12~1.gbp48ebd9 amd64 Zstandard extension for PHP
php8.2-zstd 0.13.1-1+0~20231125.8+debian12~1.gbp48ebd9 amd64 Zstandard extension for PHP
php8.3-zstd 0.13.1-1+0~20231125.8+debian12~1.gbp48ebd9 amd64 Zstandard extension for PHP
zstd 1.5.4+dfsg2-5 amd64 fast lossless compression algorithm -- CLI tool
 
Fascinating... So I'm a Redis/Object Cache Pro user, and with this latest release (1.20.2) I did the updates and applied my own mods for performance and nulling, and it worked great everywhere. BUT then today I needed to set up a new site and hit the licensing error!

Apparently something left over from having the plugin installed previously, maybe a configuration key somewhere, was letting the existing sites pass. I don't know or care enough to dig further, but I do know that this plugin is not properly nulled for new installs! So I found a fix.

This time I'll spare everyone the analysis and just share the weadown fix release, which you can diff yourself if you are curious what they changed, but I vouch for it being fully working. Whoever does the nulls for this great utility (the authors want $80 per month, wtf!?), please integrate the fix into your digital jujitsu. Thanks!
 

Attachments

  • object-cache-pro_v1.20.2__weadown--I_compared+tested_this_legit--maxv.zip
    401.5 KB · Views: 161
First make sure that your Redis server is properly installed and running. Then install the phpredis extension so that all PHP scripts, including this plugin, can access the Redis server.

Try the appropriate command for your distro:

Code:
dnf install php-zstd     # Enterprise Linux 8 or 9
apt install php8.1-zstd  # Ubuntu
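
If you want to confirm the extensions are actually loaded for the PHP build your sites run on, a quick throwaway check like this works (a minimal sketch; the extension list just mirrors what this config relies on):

Code:
<?php
// Run this with the same PHP binary your sites use, e.g. `php8.2 check-ext.php`.
// Reports whether each extension this setup relies on is loaded.
foreach (['redis', 'zstd', 'igbinary'] as $ext) {
    printf("%-10s %s\n", $ext, extension_loaded($ext) ? 'loaded' : 'MISSING');
}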

Also, replace the lines:

Code:
define('WP_REDIS_DISABLED', getenv('WP_REDIS_DISABLED') ?: false);
define( 'WP_REDIS_PREFIX', 'yourdomain.com:' );  <-- remove
define( 'WP_MEMORY_LIMIT', '5120M' ); <-- that's nuts
define( 'WP_MAX_MEMORY_LIMIT', '6400M' );  <--- remove and configure it in php.ini

with:

Code:
define('WP_REDIS_DISABLED', false);
define( 'WP_MEMORY_LIMIT', '128M' );

And don't forget to check if you have the zstd library itself installed :)
First of all, thank you for the prompt response 😊 I have cPanel/CentOS. What should I look for in this case? I'm fairly sure I installed it correctly with PECL.
 
I have a dedicated server with 100+ websites from my customers, and now that I'm reading the configuration docs I've become aware of this: "'database' => 0, // change for each site". The default in redis.conf is only 16 databases. Can I increase it to 200? And does every client need a unique database number, or? This is confusing to me. How will my users know which number to put there? How do I monitor that there are no collisions? Thanks!
 
@randomdecko The number of Redis databases is configurable in redis.conf, so 16 is not a hard limit. Sharing one database ID across sites really only makes sense if they belong together, for example WP multisite, where of course you aren't mixing different people's private data in one database :)
 
No, I don't have WP multisite. I have 100 different users with their own WP sites. So do I need to assign database numbers manually now? What is the best way to track which site is on which database so there are no collisions?
 
The default in redis.conf is only 16 databases. Can I increase it to 200? And does every client need a unique database number? How do I monitor that there are no collisions?
I think you'll want to not worry about separate databases at all (if they're on a single Redis install they share the available resources anyway) and instead use a unique prefix for the data stored by each site. The old way to do this is to add a line like this in each customer's wp-config.php file:
Code:
define('WP_CACHE_KEY_SALT', 'unique_prefix_for_site:');

However, it has now largely been replaced by the following, which is the current recommended approach:
Code:
define('WP_REDIS_PREFIX', 'unique_prefix_for_site:');

So, for example, you could use customer logins or the fully qualified domain name (hostname) of the site:
Code:
define('WP_REDIS_PREFIX', 'www.example.com:');

If you do it this way there will never be collisions, and in theory there is nothing to keep track of, since each hostname is unique. (This is why I say hostname, not domain name: there might be, for example, shop.example.com and blog.example.com, so you shouldn't just use example.com but the full FQDN/hostname.) I used to use unique database numbers, but it's a hassle to keep track of them, and this method is safe and is also what the Object Cache Pro people recommend.

You can make adding that config line to wp-config.php part of your provisioning process, or have a custom plugin that ensures the line is present and contains the current hostname as the prefix string, or have a script that checks this - any number of approaches work. I have switched to this method myself and no longer assign separate database numbers; since they share resources anyway, separate databases are a tree falling in the forest - it feels better but it is in fact no better. Also, there is a limit to the number of databases, but there is no limit to the number of prefixes.
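
For the "script that checks this" idea, here is a rough sketch. It assumes a cPanel-style layout where each site lives under /home/*/public_html - adjust the glob for your own setup:

Code:
<?php
// Rough sketch: report sites whose wp-config.php has no WP_REDIS_PREFIX line,
// and flag any prefix that is used by more than one site.
$seen = [];
foreach (glob('/home/*/public_html/wp-config.php') as $config) {
    $source = file_get_contents($config);
    if (preg_match("/define\\(\\s*'WP_REDIS_PREFIX'\\s*,\\s*'([^']+)'/", $source, $m)) {
        $seen[$m[1]][] = $config;
    } else {
        echo "MISSING prefix: $config\n";
    }
}
foreach ($seen as $prefix => $configs) {
    if (count($configs) > 1) {
        echo "DUPLICATE prefix '$prefix':\n  " . implode("\n  ", $configs) . "\n";
    }
}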

One minor issue related to cache flushing if you don't use Object Cache Pro: when Object Cache Pro flushes the Redis cache, it goes by the prefix and only clears keys for the one site. However, I suspect (but don't know!) that other cache plugins like LiteSpeed or WP Redis might flush on a per-database basis and not care about the prefix at all. If true, it's not a big deal, but you should be aware of it. (Flushing the cache is harmless and is in fact recommended periodically.)
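
For what it's worth, prefix-scoped flushing itself is straightforward with phpredis. This is only an illustration of the idea, not how any particular plugin implements it (it assumes the phpredis extension and a local Redis on the default port):

Code:
<?php
// Delete only the keys belonging to one site, identified by its prefix.
// FLUSHDB would wipe the whole database; SCAN + DEL stays scoped to one site.
$redis = new Redis();
$redis->connect('127.0.0.1', 6379);
$redis->setOption(Redis::OPT_SCAN, Redis::SCAN_RETRY); // scan() then returns false only when iteration is done

$prefix = 'www.example.com:';
$it = null;
while ($keys = $redis->scan($it, $prefix . '*', 1000)) {
    $redis->del($keys); // del() accepts an array of keys
}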
 
I think you'll want to not worry about separate databases at all and instead use a unique prefix for the data stored by each site.
What about multisite, where some subsites are on a subdomain (e.g. sub1.maindomain.com) and some are mapped to their own unique domain (e.g. sub1.maindomain.com > sub1.com)?
 
It's your setup, you should know something like this. Basically, as long as you pick something consistent you will always have a unique key. In your example, "sub1.maindomain.com" is the string that is unique to that site (regardless of how it renders, forwards, or presents itself), and if you have that WordPress instance use "sub1.maindomain.com" as the prefix for its Redis entries, it will be the only site using that prefix, so its entries in the Redis database are unique.

If you wanted to, you could use completely random strings, or a hash of something unique - it could even be a hash of the current timestamp. It doesn't really matter as long as it's different for each site; any of those would work. I think this works best when you use something human-readable, like the fully qualified domain name (sub1.maindomain.com in this example). You're unlikely to ever be digging through the Redis database manually trying to work out which keys belong to which WordPress instance, but if you ever did, they'd be clearly labeled and you'd instantly see who owns which entries. A random mishmash of letters and numbers, or a hash of the current time (which will never collide unless two sites are created in the same second by some weird race condition), works just as well technically, it's just meaningless to a human looking at it later. There might only be a small advantage to using the FQDN as the prefix, but it sure is simple.

Oh, I did just realize one benefit of hashing: Unicode domain names aren't a problem, because the hash output is ASCII (sha256, for example). I'm not sure, but if you had some Chinese characters in the hostname and used that as the Redis prefix, that might not work - I really don't know. You could also just normalize the name to ASCII using one of the well-established functions for that.
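
Something like this, as a minimal sketch of the hashing idea (the hostname value here is just an example):

Code:
// Derive an ASCII-only Redis prefix from the site's hostname.
$host   = 'sub1.maindomain.com';                      // unique per site
$prefix = substr(hash('sha256', $host), 0, 16) . ':'; // 16 hex chars, always ASCII
define('WP_REDIS_PREFIX', $prefix);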

The bottom line is that with the prefix method you don't have the limits you get from picking separate database numbers, and it can be very easy to manage if you use something like the FQDN idea.
 
I think you'll want to not worry about separate databases at all and instead use a unique prefix for the data stored by each site.

I got this answer from the plugin's support:

Each client should have their own database and you can bump the number to 1000 in your redis.conf without performance issues.

I agree with you. It's a big problem to keep track of them.

I used to use unique database numbers but it's a hassle to keep track of them, and this method is safe and also is recommended by the Redis object cache pro people.

I use this method on my 2 clients' websites, and automatically there will be a collision between sites, because they use the same database and different prefix site names.
Code:
define('WP_REDIS_CONFIG', [
    'token' => '@Y:R;Vl^W9wCX[;,6s]aSF2XY-pyJaj%@7S9-kf@U)HZ^5Tr)A*F9hDjoFl5',
    'host' => '127.0.0.1',
    'port' => 6379,
    'database' => 0, // change for each site
    'timeout' => 0.5,
    'read_timeout' => 0.5,
    'retry_interval' => 10,
    'retries' => 3,
    'backoff' => 'smart',
    'compression' => 'zstd', // `zstd` compresses smaller, `lz4` compresses faster
    'serializer' => 'igbinary',
    'async_flush' => true,
    'split_alloptions' => true,
    'prefetch' => true,
    'strict' => true,
    'debug' => false,
    'save_commands' => false,
]);

define('WP_REDIS_DISABLED', getenv('WP_REDIS_DISABLED') ?: false);

define( 'WP_REDIS_PREFIX', 'domainname.com:' );
 
I use this method on my 2 clients' websites, and automatically there will be a collision between sites, because they use the same database and different prefix site names.
Surely this is a typo or translation error? You don't really mean to say there "will be a collision", right? You mean no collision? Because when using different prefixes they are in fact totally different keys and cannot collide.
 
Code:
define('WP_REDIS_PREFIX', 'unique_prefix_for_site:');
Code:
define('WP_REDIS_PREFIX', 'www.example.com:');


Great tutorial.

So, www.example.com actually means unique_prefix_for_site, right?
 
So, www.example.com actually means unique_prefix_for_site, right?
Correct - after using a placeholder I showed an example usage, and example.com is the old-school domain that is meant to be used for examples like this. In your case it could be "babiato.com" or "daniellisitewhatever.com", etc. example.com does resolve to a live page these days, but it exists purely for example purposes: it was reserved precisely so people wouldn't use examples that are real sites owned by other people. Despite that, people still do screwy things, like young hackers who don't know better using "target.com" in their examples - Target is of course a real site (the retail chain), not the abstract "target" as in "subject of one's scan". There's a lot of that kind of abuse, people writing "yourwebsite.com" or "website.com" and so on, which are surely real sites owned by someone. Seeing someone use example.com properly is a surefire sign they're older generation, or at least that they learned from old books :)

Also, I personally am nervous about using periods in key names, so I stick to letters and numbers and would make my own setting "examplecom" instead of "example.com" for the key salt, but that's just old habits from past problems with non-alphanumeric characters. I'm sure periods, underscores, etc. are fine. (I verified this, and periods are definitely OK.)
 

simplysexyboy I'd be interested in seeing any benchmarks you've performed on Redis Object Cache Pro. I've seen comments from people questioning whether there are any performance benefits at all, and AFAIK the plugin's author never provided that evidence in a clear and unambiguous way. Cheers

 

I'd be interested in seeing any benchmarks you've performed on Redis Object Cache Pro.


Valid points.

The plugin author is very arrogant and has never clarified this aspect.
 

I'd be interested in seeing any benchmarks you've performed on Redis Object Cache Pro.



This is the general understanding of Redis object caching.

Redis object caching offers numerous benefits, enhancing website performance by storing frequently accessed data in memory for rapid retrieval. It reduces database load, decreases page load times, and improves scalability. With its efficient data structures and high-speed operations, Redis optimizes resource utilization, enhancing user experience and site responsiveness.
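
As an illustration of where that benefit comes from, this is the standard WordPress object-cache pattern (the key, group, and query below are made up for the example); with a persistent backend like Redis, repeat requests get the data from memory instead of re-running the MySQL query:

Code:
// Standard WordPress cache API usage; key/group names are hypothetical.
$products = wp_cache_get('featured_products', 'my_shop');
if (false === $products) {
    global $wpdb;
    $products = $wpdb->get_results(
        "SELECT ID, post_title FROM {$wpdb->posts} WHERE post_status = 'publish' LIMIT 10"
    );
    wp_cache_set('featured_products', $products, 'my_shop', 300); // cache for 5 minutes
}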

Testing simulators do not show any improvement. However, there might be some improvement in data transaction speed that is out of scope for simulators.

Thanks


/\
 
Yes, I've seen the author be very rude when people question the performance gains of the plugin.

This user showed his site was slower when using "Redis Object Cache".

 
Yes, I've seen the author be very rude when people question the performance gains of the plugin.

What a cunning fella! He changed the subject and asked the user to buy his pro plugin instead. Rather than actually clearing up the doubts, he kept blaming the user throughout the conversation. Lolz.

Thanks

/\
 