The magic appears when you use logging across multiple modules. Suppose you have an app called app.py that loads a function from a module as in the following code:
# app.py
from my_module import my_func

my_func()
# my_module.py
import logging

logger = logging.getLogger(__name__)

def my_func():
    logger.error('message from mod 1')
    logger.debug('a debugging message')
If you run app.py (under Python 2; Python 3 added a "last resort" handler that prints warnings and errors even with no configuration) you will get the error:
No handlers could be found for logger "my_module"
But add an import in app.py and suddenly the error in my_module.py goes away:
# app.py
from my_module import my_func
import logging

logging.error('A message from app.py')
my_func()
I lied: the import alone is not enough. You also need the logging.error('A message from app.py') call, because it is the first logging call, not the import, that configures the root logger.
I cannot think of another case where an import in one module affects another module. That is why I did not like using the logging module: I never got comfortable with how it worked across modules. But magic is also powerful. Time to learn it. Before we go any further, you should read the basic tutorial in the Python docs.
The questions we are going to address are:
- how does logging come to life?
- how does it stay alive when execution passes to a new module?
The answer is that once the logger is configured, it becomes part of the Python interpreter process that is executing your code. It effectively becomes global.
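You can see this shared, process-wide state directly. logging.getLogger keeps a single registry of loggers inside the logging module, so two calls with the same name return the identical object (the name here is just illustrative):

```python
import logging

# getLogger looks the name up in one registry shared by the whole process,
# so repeated calls with the same name return the very same object.
a = logging.getLogger('my_module')
b = logging.getLogger('my_module')
assert a is b  # one shared instance per name, process-wide
```

This is why configuring logging in app.py changes what my_module.py's logger does: they are reaching into the same shared structure.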
Part of the confusion also comes from how easy it is to configure logging. As shown in the snippet above, simply importing logging and logging a message auto-magically configures a logger for your entire project. Convenient, but unnerving if you do not know it's happening.
Here is a simple example of a more explicitly configured logger:
# app.py
from my_module import my_func
import logging

logging.basicConfig(filename='example.log', level=logging.DEBUG)
logging.error('A message from app.py')
my_func()
The logger in the previous example wrote to the console. This logger writes to a file and changes the default level. The amazing thing here is that the log messages in my_module.py also show up in this file.
This also explains why the line:
logger = logging.getLogger(__name__)
shows up in so many modules. This line names the module's logger after the module itself, so (with the default format) every message is prefixed with the module name. Pretty helpful when you have 47 different modules writing to the same log file.
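Concretely, basicConfig's default format is '%(levelname)s:%(name)s:%(message)s', so the logger's name, which getLogger(__name__) sets to the module name, appears in every line. A small sketch (using a literal name in place of __name__):

```python
import logging

logging.basicConfig()                     # default format: %(levelname)s:%(name)s:%(message)s
logger = logging.getLogger('my_module')   # in a real module: logging.getLogger(__name__)
logger.error('something broke')
# prints: ERROR:my_module:something broke
```

Swap in your own format string via basicConfig(format=...) if you want timestamps or filenames as well.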
I think that does it for me. With these mysteries cleared up, the rest of the Python docs are pretty straightforward.
But since we have gotten this far, I want to share a debugging trick. Often when I am debugging a module, I run it as __main__ along with some setup code. Here is how to make the logger work both when the module runs as __main__ and when it is called as part of a larger app:
# mod1.py
import logging

logger = logging.getLogger(__name__)

def mod1():
    logger.error('message from mod 1')
    logger.debug('a debugging message')

if __name__ == '__main__':
    logging.basicConfig(level=logging.DEBUG)
    mod1()
This sets up a different configuration than the one used when the module is not __main__. In the app, you may have the level set at ERROR. But when running the module as main, you can have the level be DEBUG. Also, this configuration writes to the console, which is much more convenient than writing to a file. It essentially creates print statements that can be turned on and off.