-
August 27th, 2019, 07:07 PM
#1
Is there a way to remove or reduce randomness from debugging?
Assuming the log contains message strings that also appear in the code, searching the source for a log message can lead you to the line, or lines, that produced it.
One recipe could be this:
1- Find the first error message in the log.
2- Find the corresponding line in the code.
3- Find the incorrect variable or function with watch/backtrace.
Is there a complete and detailed algorithm, instruction, or standard, especially for gdb and C++?
For example
Consider a large C++ application on Linux. If something goes wrong and you find a variable with a bad value, how do you determine where it got that value? With grep in the source, or by executing the code under a watch? With what debugging pattern do you trace the bad-value variable back to the first variable that caused the error, probably one related to a table?
Last edited by alwaystudent; August 27th, 2019 at 10:16 PM.
-
August 29th, 2019, 03:24 PM
#2
Re: Is there a way to solve the problem of debugging as an algorithm or instruction?
I use a modern debugger/IDE which allows me to click on an error and it takes me to the offending line of code.
-
September 1st, 2019, 02:39 AM
#3
Re: Is there a way to remove or reduce randomness from debugging?
Originally Posted by alwaystudent
With grep in the source or code execution and watch ? With what pattern of debug do you track the bad value variable to the first variable that caused the error probably related to a table ?
It sounds like you're using Emacs and the GNU debugger. In that case it's better to search for info about these specific tools; otherwise you only get general advice like "use a modern IDE" or "use modern program design". I found this, but it's only an example:
https://undo.io/resources/gdb-catchpoints/