ANSI/IEEE (1991) defines reliability as “the probability of failure-free software operation for a specified period of time in a specified environment”. That sounds pretty much like my definition of availability.
Where do you get your dopamine?
- The answer is predictive of your behavior
- Better to get your dopamine from improving your ideas than from having them validated
- It’s ok to get yours from “making things happen”
The output of Python's built-in `hash` function is not guaranteed to be the same across different Python versions, platforms, or executions of the same program.
Let's take a look at the following example:
$ python -c "print(hash('foo'))"
-677362727710324010
$ python -c "print(hash('foo'))"
2165398033220216763
$ python -c "print(hash('foo'))"
5782774651590270115
As you can see, the output of the `hash` function is different for the same input `"foo"`. This is not a bug but a feature in Python 3.3 and above: Python 3.3 introduced hash randomization as a security feature, to prevent attackers from using hash collisions for denial-of-service attacks.
Every time you start a Python program, a random value is generated and used to salt the hash values. This ensures that hash values are consistent within a single Python run, but different across runs.
You could disable hash randomization by setting the environment variable `PYTHONHASHSEED` to `0`, but this is not recommended.
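A quick way to see the effect (a sketch; the printed value is still version- and platform-specific, so no expected output is shown):

```shell
# With hash randomization disabled, both runs print the same value
PYTHONHASHSEED=0 python -c "print(hash('foo'))"
PYTHONHASHSEED=0 python -c "print(hash('foo'))"
```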
If you want to hash arbitrary objects deterministically, you can use the `ubelt` or `joblib.hashing` modules.
Here's an example using `ubelt`:
import ubelt as ub
print(ub.hash_data('foo', hasher='md5', base='abc', convert=False))
Result:
$ python -c "import ubelt as ub; print(ub.hash_data('foo', hasher='md5', base='abc', convert=False))"
blhtggyvbuyhspdolqxdrhoajdka
$ python -c "import ubelt as ub; print(ub.hash_data('foo', hasher='md5', base='abc', convert=False))"
blhtggyvbuyhspdolqxdrhoajdka
$ python -c "import ubelt as ub; print(ub.hash_data('foo', hasher='md5', base='abc', convert=False))"
blhtggyvbuyhspdolqxdrhoajdka
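If you only need to hash strings or bytes and want to avoid a third-party dependency, the standard library's `hashlib` is also fully deterministic across runs, versions, and platforms, unlike the built-in `hash`. A minimal sketch (`stable_hash` is my own helper name, not a stdlib function):

```python
import hashlib

def stable_hash(data: str) -> str:
    """Return a deterministic hex digest for a string."""
    return hashlib.sha256(data.encode("utf-8")).hexdigest()

# Same digest on every run, platform, and Python version
print(stable_hash("foo"))
```

Note that `hashlib` only accepts bytes, so arbitrary objects still need a library like `ubelt` or `joblib`, or a serialization step of your own.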
Found this interesting site which maps the concepts in the Bhagavad Gita: Concept Maps | Gita Supersite. The map is huge and I am still exploring it. It made me realize that I have a lot to learn about the Gita. I did read the Gita when I was young but I don't think I grasped a lot of the ideas in the book.
- Pattern matches can act upon ints, floats, strings and other types as well as objects. Method dispatch requires an object.
- Pattern matches can act upon several different values simultaneously: parallel pattern matching. Method dispatch is limited to the single `this` case in mainstream languages.
- Patterns can be nested, allowing dispatch over trees of arbitrary depth. Method dispatch is limited to the non-nested case.
- Or-patterns allow subpatterns to be shared. Method dispatch only allows sharing when methods are from classes that happen to share a base class. Otherwise you must manually factor out the commonality into a separate member (giving it a name) and then manually insert calls from all appropriate places to this superfluous function.
- Pattern matching provides exhaustiveness and redundancy checking which catches many errors and is particularly useful when types evolve during development. Object orientation provides exhaustiveness checking (interface implementations must implement all members) but not redundancy checking.
- Non-trivial parallel pattern matches are optimized for you by the F# compiler. Method dispatch does not convey enough information to the compiler’s optimizer so comparable performance can only be achieved in other mainstream languages by painstakingly optimizing the decision tree by hand, resulting in unmaintainable code.
- Active patterns allow you to inject custom dispatch semantics.
I recently got more serious about learning Rust and have noticed a lot of similarities between the two languages. One major difference is that F# is garbage collected (it runs on the .NET runtime, like C#). F# is also a lot more forgiving than Rust, imo.
`_wait_for_tstate_lock`.
One way to avoid the hang is to clear the queue before exit.
To figure out which queue is not empty:
import multiprocessing
import inspect

# initialize a queue
q = multiprocessing.Queue(10)

# record where the queue is used so we can identify it later
caller = inspect.getframeinfo(inspect.stack()[0][0])
thread_name = f"MultiQueue_{caller.filename}:{caller.lineno}"

# the feeder thread is only created on the first put()
q.put("hello")

# name the feeder thread (note: _thread is a private attribute)
q._thread.name = thread_name
The `QueueFeederThread` is started only after you put an object into the queue, so the thread can be named only after the first `put`.
You can then use `py-spy` to figure out which thread is preventing your program from exiting:
py-spy dump -p <pid>
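As for clearing the queue before exit, a sketch of one approach (`drain_and_close` is my own helper name; note that draining discards any data still in the queue):

```python
import multiprocessing
import queue

def drain_and_close(q):
    """Drain remaining items, then let the feeder thread flush and exit."""
    try:
        while True:
            q.get(timeout=0.2)
    except queue.Empty:
        pass
    q.close()        # no more data will be put on this queue
    q.join_thread()  # wait for the feeder thread to finish

if __name__ == "__main__":
    q = multiprocessing.Queue(10)
    q.put("hello")
    drain_and_close(q)  # without this, exit can hang in _wait_for_tstate_lock
```

Alternatively, `q.cancel_join_thread()` lets the process exit without waiting for the feeder thread, at the risk of losing queued data.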
I found a framework by Alex Vermeer very interesting. He divides life into certain categories and plans out what to do over the next year. I think I have enough time this year to work on this.