Do not restart a CronJob on failure in Kubernetes

You have to set:

backoffLimit: 0 

restartPolicy: Never 

concurrencyPolicy: Forbid

backoffLimit is the number of retries before the Job is considered failed. The default is 6.

concurrencyPolicy set to Forbid means a new run is skipped if the previous one is still running, so at most one Job runs at a time.

restartPolicy set to Never means the Pod's containers won't be restarted on failure.

You need all three of these settings, or your CronJob may run more than once.
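Putting the three settings together, a minimal CronJob manifest might look like this (the name, schedule, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: one-shot-job          # placeholder name
spec:
  schedule: "0 * * * *"       # placeholder schedule
  concurrencyPolicy: Forbid   # skip a run if the previous one is still active
  jobTemplate:
    spec:
      backoffLimit: 0         # no retries before the Job is marked failed
      template:
        spec:
          restartPolicy: Never    # do not restart containers on failure
          containers:
            - name: task
              image: busybox      # placeholder image
              command: ["sh", "-c", "echo run once"]
```

Note that the three settings live at different levels: concurrencyPolicy on the CronJob spec, backoffLimit on the Job template spec, and restartPolicy on the Pod template spec.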

Elasticsearch error – this cluster currently has [1000]/[1000] maximum shards open

Short-term solution: raise the cluster's max_shards_per_node limit to 3000.

First, check why shards are unassigned:

curl -XGET 'localhost:9200/_cluster/allocation/explain?pretty'

Then try reallocating the unassigned shards. First, set the replica count to 0:

curl -XPUT 'localhost:9200/wazuh-alerts-*/_settings' -H 'Content-Type: application/json' -d '{ "index": { "number_of_replicas": "0" } }'

At the same time, execute this in another terminal to see the status:
watch -n0 'curl -s localhost:9200/_cluster/health?pretty | grep "active_shards_percent"'

If that doesn't work, try the following workaround: it raises the shard limit to 3000 so you regain control of the cluster while an architectural fix is implemented.

curl -X PUT localhost:9200/_cluster/settings -H "Content-Type: application/json" -d '{ "persistent": { "cluster.max_shards_per_node": "3000" } }'

Long-term solution: add more data nodes to your Elasticsearch cluster in the near future.

What is critical mass for a product?

“Critical mass for a product is the point at which it becomes self-sustaining. Adoption of the product (and retention of existing users) reaches a point where the product becomes profitable and continues to be profitable over time.”

“It is difficult to determine exactly when that point will be reached or where exactly that point is – however, we can take action to develop KPIs which enable us to see if we are progressing towards critical mass. Some business models are highly dependent on reaching critical mass early and these should be approached with caution if you do not have a large marketing budget to drive critical mass.”


Mount a host directory as a volume in docker compose

Short Syntax

Using the host:container format, you can do any of the following:

  # Just specify a path and let the Engine create a volume
  - /var/lib/mysql

  # Specify an absolute path mapping
  - /opt/data:/var/lib/mysql

  # Path on the host, relative to the Compose file
  - ./cache:/tmp/cache

  # User-relative path
  - ~/configs:/etc/configs/:ro

  # Named volume
  - datavolume:/var/lib/mysql
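For context, these short-syntax entries go under a service's volumes key; a minimal compose file (the service name and image are placeholders) might look like:

```yaml
services:
  db:
    image: mysql:8                  # placeholder image
    volumes:
      - ./cache:/tmp/cache          # host path relative to this compose file
      - datavolume:/var/lib/mysql   # named volume, declared below

volumes:
  datavolume:                       # named volumes must be declared at the top level
```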

Resize EBS volume on AWS

  1. In the AWS console, go to your volume and choose “Modify Volume” under “Actions.” The resize takes about 7-8 minutes.
  2. Check the partition size – lsblk
  3. Grow the partition to fill the drive – sudo growpart /dev/xvdf 1
  4. If it's an XFS filesystem, run – sudo xfs_growfs -d /mongo-data
  5. If it's an ext filesystem, run – sudo resize2fs /dev/xvdf1
  6. Check disk space – df -h
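Steps 4 and 5 depend on the filesystem type. A small sketch to check which resize tool applies ("/" is just an example target; substitute your volume's mount point):

```shell
# Print the filesystem type of a mount point so you know which grow tool to use.
# "/" is an example; replace it with your volume's mount point.
FSTYPE=$(stat -f -c %T /)
echo "filesystem type: $FSTYPE"
case "$FSTYPE" in
  xfs)       echo "grow with: sudo xfs_growfs -d <mount-point>" ;;
  ext2/ext3) echo "grow with: sudo resize2fs <device>" ;;  # ext4 also reports ext2/ext3 here
  *)         echo "look up the resize tool for $FSTYPE" ;;
esac
```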


Set/View root user capabilities

On a host, the root user comes with a set of capabilities, which can be seen under

cat /usr/include/linux/capability.h

Docker runs the container's root user with a limited set of capabilities. To grant it extra capabilities, as normal root would have, add the --cap-add param like so:

docker run --cap-add MAC_ADMIN ubuntu

To drop capabilities from the container's root user, add the --cap-drop param like so:

docker run --cap-drop KILL ubuntu

In case you want to run the container's root user with all capabilities, add the --privileged param:

docker run --privileged ubuntu
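To verify which capabilities a process actually has (on the host, or inside a container started with the flags above), you can read the capability bitmask from /proc. A minimal sketch, assuming a Linux host:

```shell
# Show the capability sets (CapInh/CapPrm/CapEff/...) of the current process.
# Inside a container run with --cap-add/--cap-drop, CapEff changes accordingly.
grep Cap /proc/self/status

# If libcap's capsh tool is installed, decode the effective set into names:
# capsh --decode=$(awk '/CapEff/ {print $2}' /proc/self/status)
```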