There are some possible causes I've considered:
- auditd lost events. Every time I run `auditctl -b xxx` or `auditctl
--reset-lost`, the lost counter is huge. At first I thought it meant
how many audit records were lost over the netlink socket, or dropped
because the kernel's audit queue was full. However, from the kernel
source, it actually counts records thrown away because there is no
listener for them. In other words, audit is one userspace-kernel
mechanism, not two independent parts.
- audit backlog size. Same as above.
But when I listen only to the "open" syscall, I can almost always see
the events from inside docker. So I suspect the audit events are
produced in a flood, while this program checks this and that for each
one and spends too much time; consumption is far slower than
production.
Next step, I will use an MVC-like split: every received event gets
pushed into the database, and a new independent component keeps the
database clean and tidy.
The key problem is that a process can open file1 as fd 3, write,
close, then open file2 as fd 3, write, close: so when a "write" event
arrives, I must figure out which file it was written to. Right now I
check pid/fd/close_time in the database to decide which file was
written, but finding and checking the document also takes a lot of
time. Maybe use two collections, one for fds of files not yet closed,
the other for closed files? (A sketch follows below.)
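A minimal sketch of that two-collection idea, written against
go.mongodb.org/mongo-driver; the collection and field names here are
my own assumptions, not the actual schema:

    import (
        "context"

        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
    )

    // onClose moves the fd's record from the not-yet-closed collection
    // to the closed one, so a later "write" on a reused fd number can
    // only ever match the file that is actually still open.
    func onClose(ctx context.Context, db *mongo.Database, pid, fd int32, ts int64) error {
        var rec bson.M
        err := db.Collection("openfds").
            FindOneAndDelete(ctx, bson.M{"pid": pid, "fd": fd}).
            Decode(&rec)
        if err != nil {
            return err
        }
        rec["close_ts"] = ts
        _, err = db.Collection("closedfiles").InsertOne(ctx, rec)
        return err
    }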
Besides, clone/fork/pthread_create all use the clone syscall, only
with different flags. Maybe I can also use the `pid/tgid` pair to
distinguish processes from threads. Good idea.
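A minimal sketch of the flag check, assuming the flags are parsed from
the a0 hex field of the audit SYSCALL record, which is clone's first
argument on x86_64:

    import "golang.org/x/sys/unix"

    // isThreadCreate reports whether a clone(2) created a thread rather
    // than a new process: pthread_create passes CLONE_THREAD, while
    // fork-style clones do not.
    func isThreadCreate(flags uint64) bool {
        return flags&unix.CLONE_THREAD != 0
    }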
Be quick: half of the internship has already passed. What kind of
results will you hand in?
It's the check(cookedEvent) function that made fileopen crash, and I'm
sorry to say I've forgotten why I added it; probably to check ppid and
pid against the database in one place instead of at the head of every
handler. However, the per-handler checks were never deleted. I
discovered this by comparing the source with 5d244e3. In theory the
extra check should only increase the delay; how does it affect
fileopen and cause an outright failure? No one knows.
The same goes for the kernel connector. If we keep the delay on pid
exit, the connector reports "Error recv: no enough buffer space", but
if we delete the delay, all modules work well. Why exactly does a
delay on pid exit exhaust the connector's receive buffer? How
outrageous!
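If the receive buffer really is overflowing, one common mitigation is
to enlarge it. A minimal sketch, assuming the connector is read from a
raw netlink socket opened with golang.org/x/sys/unix; the 8 MiB size
is an arbitrary choice:

    import "golang.org/x/sys/unix"

    // enlargeRecvBuf grows the netlink socket's receive buffer so that
    // bursts of proc-connector events are less likely to be dropped
    // with ENOBUFS ("no buffer space available").
    func enlargeRecvBuf(fd int) error {
        // SO_RCVBUFFORCE ignores net.core.rmem_max but needs
        // CAP_NET_ADMIN; fall back to plain SO_RCVBUF if refused.
        if err := unix.SetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_RCVBUFFORCE, 8<<20); err != nil {
            return unix.SetsockoptInt(fd, unix.SOL_SOCKET, unix.SO_RCVBUF, 8<<20)
        }
        return nil
    }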
Now I've come back to the original question: when I start and use
docker quickly (`start && exec && exit` in just one command), the file
open/write/close is faithfully recorded; but if I use an interactive
shell and change a file with vim inside docker, nothing is recorded.
Why? Why? Why?
For some reason, the kernel connector can catch exec events, but it
doesn't tell me what the process execs or what its args are. So we use
audit to collect that information and complete the record in the
database.
However, the connector and audit have different delays, even though
both use netlink sockets; as a result, an exec may arrive before its
fork. We handle that the same way as before. There are also lost exec
events, perhaps because of the ppid check on exec events; but that
check is necessary: I've tried deleting it, and too much irrelevant
information floods into the database. So leave it there and just move
forward.
Besides, what's newly discovered is that pthread_create also uses the
clone syscall, but if pid 1 has a thread 2, the exec info will say
that pid 2 execs. So I shouldn't ignore connector messages where
childPid != childTgid; see the sketch below.
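A sketch of that check. The struct mirrors the fork payload of the
kernel's struct proc_event (linux/cn_proc.h); the Go field names are
my own:

    // ProcEventFork mirrors the fork payload of struct proc_event.
    type ProcEventFork struct {
        ParentPid  int32 // thread id of the task that called clone
        ParentTgid int32 // its process (thread-group) id
        ChildPid   int32 // new task's thread id
        ChildTgid  int32 // new task's process id
    }

    // IsThread reports whether the fork event created a thread: a new
    // thread stays in its creator's thread group, so ChildPid differs
    // from ChildTgid, while a new process has ChildPid == ChildTgid.
    // As noted above, thread events must not be ignored, or an exec
    // reported under a thread's pid cannot be matched to its process.
    func (e ProcEventFork) IsThread() bool {
        return e.ChildPid != e.ChildTgid
    }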
This is my first attempt at using git submodules in my own project,
and at a local golang package. Congratulations!
Now, fight to fix the file operations. I hope there won't be too many
fucking bugs.
In this commit I successfully catch the open/close syscalls and insert
them into an independent MongoDB collection instead of storing them
along with the pids. Opens carrying the O_TRUNC flag are now recorded
as written.
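A sketch of what one document in that standalone collection might look
like; the field names are illustrative, not the actual schema:

    // OpenRecord is one document in the standalone opens collection.
    type OpenRecord struct {
        Pid     int32  `bson:"pid"`
        Fd      int32  `bson:"fd"` // return value of open(2)
        Path    string `bson:"path"`
        Written bool   `bson:"written"` // e.g. opened with O_TRUNC
        OpenTS  int64  `bson:"open_ts"`
        CloseTS int64  `bson:"close_ts,omitempty"`
    }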
There are two bugs from ancestor commits:
- In the 'things_left' tag commit (the grandparent of this commit), we
added a function that allows an execve to arrive before its fork, but
when that happened I forgot to insert the basic info (pid, ppid,
etc.), so it didn't work as designed. Now it does: the execve is
inserted together with pid and ppid, so the fork event can find it and
fill in the remaining info. However, we must not set start_stamp in
this case, so its absence also serves as a flag. I haven't removed the
unused execve info; that waits for the future.
- In the parent commit, syscallRegex was changed: when we add more
syscalls to be watched, we need more of their params, not only the
first one. Instead of keeping a single a0 to get the first param, I
use argsRegex for all the params (see the sketch after this list). But
that change broke syscallRegex's match. Now it's fixed.
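A minimal sketch of an all-params regex of the kind described; the
exact pattern in the repo may differ:

    import "regexp"

    // argsRegex captures every aN=<hex> argument pair of an audit
    // SYSCALL record, instead of matching only a0.
    var argsRegex = regexp.MustCompile(`\ba(\d+)=([0-9a-fA-F]+)`)

    // Usage: one submatch per argument, in order.
    //   args := argsRegex.FindAllStringSubmatch(record, -1)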
To record it, we must listen to open/write and several other syscalls,
and now I've added open to the 2nd goroutine. For the open syscall,
what we should do is judge the flags argument (the 2nd param of the
syscall) to find out whether it can write to the file. If so, the
syscall's exit value is the new file descriptor, and when write is
later called, audit shows only the file descriptor, not the file name.
So the next step is to add things to the 3rd goroutine, get the whole
program running again, and find the bugs.
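A sketch of that flags check, assuming the flags are parsed from the
a1 hex field of the audit record; treating O_TRUNC as a write comes
from the commit above, the rest of the flag choice is mine:

    import "golang.org/x/sys/unix"

    // canWrite reports whether an open(2) with these flags can modify
    // the file. O_TRUNC counts even with O_RDONLY: it truncates.
    func canWrite(flags int) bool {
        acc := flags & unix.O_ACCMODE
        return acc == unix.O_WRONLY || acc == unix.O_RDWR ||
            flags&unix.O_TRUNC != 0
    }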
The most important work in this period was finding a solution to the
out-of-order bug. To describe it in detail: events from audit may be
out of order, which means a fork may arrive after the execve, even
after the exit. What an absurd phenomenon, to see a process that has
not yet been created already working or exiting!
I've tried several ways to deal with this problem:
- In the 2nd goroutine, when the EOE message arrives, send fork/clone
events immediately and let everything else wait for a while (say
100 ms). But that only adds delay, and has other problems.
- The 2nd goroutine doesn't send directly, but records every finished
event id in a slice, and another thread checks once per second and
sends the corresponding events in event-id order. But an event that
happens first doesn't always have the lower id or timestamp: for
example, 1 forks 2, then 2 execves; the kernel's audit may still
record the execve before the fork (maybe fork performs other setup
first), so the execve gets the earlier timestamp and the lower event
id. The out-of-order problem is not completely resolved. If we then
add delays to non-clone events, a more serious problem appears: the
slice of finished event ids must be protected by a mutex shared
between the send thread and the wait thread, and the wait thread can
never acquire the mutex, because there are too many clone events and
sends are too frequent!
- So now I use no delay, only MongoDB: when an execve arrives and its
pid is not recorded yet, just insert it and wait for the fork (see the
upsert sketch after this list). It does work, but some things are
still left to do:
  - What should I do if "2 forks 3" arrives before "1 forks 2"? For
now I assume it doesn't happen, but what if it does?
  - When an execve arrives before its fork, I record it; but if the
process turns out to have a parent I don't care about, should the
record be deleted or stay there?
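A minimal sketch of that insert-or-wait upsert, written against
go.mongodb.org/mongo-driver; the collection and field names are my own
assumptions:

    import (
        "context"

        "go.mongodb.org/mongo-driver/bson"
        "go.mongodb.org/mongo-driver/mongo"
        "go.mongodb.org/mongo-driver/mongo/options"
    )

    // onExecve upserts the process document. If the fork hasn't arrived
    // yet, the document is created with just pid/ppid and no start_stamp
    // (whose absence doubles as the "fork not seen yet" flag); the fork
    // fills in the rest later. Every execve is appended to the EXECVE
    // array with its time and args.
    func onExecve(ctx context.Context, procs *mongo.Collection,
        pid, ppid int32, ts int64, args []string) error {
        _, err := procs.UpdateOne(ctx,
            bson.M{"pid": pid},
            bson.M{
                "$setOnInsert": bson.M{"ppid": ppid},
                "$push":        bson.M{"execve": bson.M{"time": ts, "args": args}},
            },
            options.Update().SetUpsert(true))
        return err
    }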
Also, as mentioned above, I've added an EXECVE field to each process
document, recording all the execves (time and args) of that process.
Besides, exit_timestamp and exit_code can be caught now, but too many
processes have no exit info. That is also to be fixed.
Now, let's listen for the files changed by each process. Don't forget
the to-do items listed above!
I failed to print the process tree. While I'm printing the tree, the
tree itself gets changed, maybe even deleted. What's more, the output
shows four lines with the same ppid and pid; what an absurd result! It
is probably caused by multithreading. So, use a database instead.
MongoDB stores data as BSON (binary JSON) rather than as a relational
database like MySQL, which should make it easier to use. (?)
Besides inserting, I've also solved the problem that "fork" is called
once but returns twice. For instance, when pid 1 forks pid 2, the
audit log does not contain one event "syscall=clone,ppid=1,pid=2" but
two events, "syscall=clone,exit=0,ppid=0,pid=1" and
"syscall=clone,exit=2,ppid=0,pid=1", which is just what sys_fork in
the kernel source would suggest. To deal with this, when the syscall
is clone and the exit value is 0, we simply drop the event; a sketch
follows below.
Question left: find out the exit code when a process calls
exit/exit_group, and finish the code that records it in the database.
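A minimal sketch of that filter; the parameter names are placeholders
for whatever the audit parser produces:

    // keepCloneEvent reports whether a clone record should be kept.
    // clone is audited once per returning task; only the record whose
    // exit value is nonzero (the child's pid) marks a real new process,
    // so records with exit == 0 are dropped.
    func keepCloneEvent(syscall string, exit int64) bool {
        return !(syscall == "clone" && exit == 0)
    }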
Putting all the source code in a single file is too ugly, so divide
it! Also move the files into a src dir to keep the whole repo clean.