Useful info

-- brenoorzari

Daily Report

Important files/folders whose dates I don't remember

The important files/folders inside MG are:

  • Background:
    • all the withelecmuon* (25000 events);
  • Signal:
    • truth_studies-master;
    • mass_dark_Higgs:
      • run_01-03 - Default CMS card (10000 events);
      • run_04-06 - CMS card filtered (10000 events);
      • run_07-09 - CMS card filtered with btagging change (10000 events);
    • signal_mass_dark_Higgs_more_events - CMS card filtered with btagging change (25000 events);
    • DM_signal_withelecmuon - CMS card filtered with btagging change (25000 events);
    • DM_signal_withelecmuon_change_recojet - CMS card filtered with btagging and jet reco object change (25000 events);
  • Files:
    • steering.dat;
    • run_card.dat;
    • param_card.dat;
    • ~/Downloads/MG5_aMC_v2_6_4/mass_dark_Higgs/Cards/delphes_card.dat;

The important files inside Delphes are:

  • .pdf:
    • all the fatjet_mass*;
  • Background:
    • all the withelecmuon* (25000 events);
  • Signal:
    • all the mass* (10000 events);
    • all the filtered_mass* (10000 events);
    • all the btag_change_filtered_mass* (10000 events);
    • all the more_events_mass* (25000 events);
    • all the nojet_mass* (25000 events with the normlepeff* card);
  • Macros:
    • plot_varios_fjetmass.C;
    • maisum_dark_Higgs.C;
    • sanity_macro.C;

Important Info

Luminosity for 25000 events (in fb^-1):

  • Backgrounds:
    • Z: 1454.44;
    • W: 439.71;
    • Diboson: 489.06;
    • top: 476.95;

BG Process | Events | Veto  | All cuts | Lumin. [fb^{-1}]
Z + bbbar  | 25000  | 24989 | 580      | 1454.44
W + bbbar  | 11407  | 4245  | 91       | 439.71
Diboson    | 25000  | 4996  | 50       | 489.06
ttbar      | 25000  | 12587 | 208      | 476.95
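
These luminosities are just the effective luminosities of the generated samples, L = N_gen/xsec; e.g. the Z + bbbar row implies an effective cross section of 25000 events / 1454.44 fb^-1 ≈ 17.2 fb (a consistency check of the relation, not a new measurement).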

Result | Diboson     | Z+bb        | W+bb        | tt
Paper  | 0.56+/-0.02 | 2.42+/-0.07 | 1.16+/-0.06 | 2.83+/-0.12
Mine   | 0.22        | 1.05        | 0.48        | 1.09

Result             | Diboson | Z+bb | W+bb | tt
BG/Diboson (paper) | 1       | 4.32 | 2.07 | 5.05
BG/Diboson (mine)  | 1       | 4.77 | 2.18 | 4.95

  • Signal:
    • m_{hs} = 50 GeV - 34.36;
    • m_{hs} = 70 GeV - 42.20;
    • m_{hs} = 90 GeV - 51.21;

26/02/2019

Simulated the signal using the filtered CMS card, with the b-tagging efficiency changed and with the object used to reconstruct the jets also changed. The name of the folder inside the MG folder is DM_signal_withelecmuon_change_recojet. The ROOT files inside the Delphes folder are named DM_signal_withelecmuon_change_recojet50.root, DM_signal_withelecmuon_change_recojet70.root and DM_signal_withelecmuon_change_recojet90.root. The macros plot_varios_fjetmass.C and maisum_dark_Higgs.C were used on those files. The pdf file fatjet_mass_CMSrecojet.pdf was created.

  • The signal was simulated using 25000 events for each mass of the dark Higgs, with the following luminosities (fb^-1):
    • m_{hs} = 50 GeV - 34.36;
    • m_{hs} = 70 GeV - 42.20;
    • m_{hs} = 90 GeV - 51.21.

Changed the delphes_card.dat to produce the gen fat jets. Started simulating the signal with it to see what happens with the b-tagging. The name of the folder inside the MG folder is DM_signal_withelecmuon_genfatjet. The ROOT files inside the Delphes folder are named DM_signal_withelecmuon_genfatjet50, DM_signal_withelecmuon_genfatjet70 and DM_signal_withelecmuon_genfatjet90. The macros plot_varios_fjetmass.C and maisum_dark_Higgs.C were used on those files. The pdf file fatjet_mass_CMSgenfatjet.pdf was created. I'll create another macro to analyze the generated fat jets, just like the reconstructed ones, to see the differences.

  • The signal was simulated using 25000 events for each mass of the dark Higgs, with the following luminosities (fb^-1):
    • m_{hs} = 50 GeV - 34.36;
    • m_{hs} = 70 GeV - 42.20;
    • m_{hs} = 90 GeV - 51.21.

The next step will be to do the b-tagging by hand, that is, to try to match the generated b quarks to the jets (gen and reco) and apply the paper efficiency to them. Also, the test of the random number could be done with one of the masses to check its role in the fat jet invariant mass histograms. The slides from 25/02/2019 that I showed to Thiago must be changed to be better understood.


08/04/2019

Got a graph exactly like the one in the "Hunting the dark Higgs" paper. The name of the file is ALL_backgrounds_mJ.pdf, and the macro used to make it is ALL_backgrounds_combined_rightratio.C with the withelecmuon_ files for the backgrounds. I've rescaled the luminosities to 3.2 fb^{-1} and used a primitive k-factor to do it (I chose 4 because it's the proportion to the ATLAS results). There are some discrepancies between my graph and the paper one that I must discuss with Thiago. But it's something.

  • bg.png:
    bg.png

The figure on the left side is from arXiv:1701.08780 [hep-ph].

I've got the lepton veto right by using the reconstructed leptons instead of the generated ones (I guess I was getting ALL of the generated leptons, instead of only the isolated ones). Less than 1% of the signal events were cut by this veto. My problem is still in the b-tagging and the MET cut. The number of events removed by the MET cut is expected, since I'm keeping only the events at the end of the tail (writing this down so I'll understand this sentence every time). The b-tagging is excluding almost 100% of the signal events, and that is disturbing.

To solve it, I've generated some new signal event TREES from Delphes for the 3 dark Higgs masses (10000 events each), which contain the gen particle, MET, jet, gen jet, fat jet, gen fat jet, and reco electron and muon branches, to do the b-tagging by hand and check what is happening. The folder is named DM_signal_withelecmuon_btagging_test, and the files are DM_signal_withelecmuon_btagging_test50, DM_signal_withelecmuon_btagging_test70 and DM_signal_withelecmuon_btagging_test90. The macro genfatjet_macro.C was used on those files.

Also, I've built a macro (btagging_namao.C) to check the Delphes b-tagging, and it's WRONG! In the .root files there is no b-tagging in the jets, but when I do it by hand I find a lot (more than zero, in this case) of b-tagged jets. I'm also testing the jet->Flavor member, and it seems to be working fine. I don't know what the problem with Delphes is. The next step is to implement my b-tagging on the real files and get the number of events that pass all of the requirements.
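
A minimal sketch of the by-hand b-tagging idea (my reconstruction, not the actual btagging_namao.C; the DeltaR cone of 0.4 and the 0.7 efficiency are placeholder assumptions):

  // Match generated b quarks to jets in DeltaR and accept each match with a
  // fixed efficiency thrown from a random number, like Delphes does.
  #include "TLorentzVector.h"
  #include "TRandom3.h"
  #include <vector>

  std::vector<bool> btagByHand(const std::vector<TLorentzVector>& jets,
                               const std::vector<TLorentzVector>& bquarks,
                               double maxDR = 0.4, double eff = 0.7) {
    TRandom3 rng(0);
    std::vector<bool> tagged(jets.size(), false);
    for (size_t i = 0; i < jets.size(); ++i) {
      for (const auto& b : bquarks) {
        if (jets[i].DeltaR(b) < maxDR && rng.Uniform() < eff) {
          tagged[i] = true;  // matched to a gen b and passed the efficiency throw
          break;
        }
      }
    }
    return tagged;
  }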

There is another new folder, more_events_DM_signal_withelecmuon_btagging_test, since 10000 events aren't enough for what I need. There I'll test the b-tagging again with the gen particles and the jet->Flavor option. The files inside the folder (and in the Delphes folder) are more_events_DM_signal_withelecmuon_btagging_test50, more_events_DM_signal_withelecmuon_btagging_test70 and more_events_DM_signal_withelecmuon_btagging_test90, and I've used the macro btagging_namao.C to check the b-tagging. These files will be used for the signal number-of-events prediction.

  • The signal was simulated using 30000 events for each mass of the dark Higgs, with the following luminosities (fb^-1):
    • m_{hs} = 50 GeV - 41.24;
    • m_{hs} = 70 GeV - 50.64;
    • m_{hs} = 90 GeV - 61.71.

Tried to solve the NLO problem in MadGraph, but my knowledge doesn't allow me to do it. I must also talk to Thiago about it. I've even tried to use what CLASHEP taught me, but it didn't work because there is something else missing.


06/06/2019

I've had some problems generating processes at NLO or using access2, but everything seems to be solved. Apparently the macro for analyzing the events is finished (I HOPE SO), and the results I've already got concerning the dark Higgs events are different from the paper results. I need to show everyone these results at the weekly meeting. 300k events of dark Higgs for each of the masses 50 GeV, 70 GeV and 90 GeV were generated using access2; the ROOT files are there, and also in my folder /Delphes-3.1.4, named 300k_*gev.root.

The NLO processes are difficult to generate, and I'm trying not to follow the paper on this, but it seems very hard. The processes at NLO are p p > b b~ vl vl~ [QCD] and p p > b b~ vl l [QCD]. I think that the processes p p > z z [QCD] and p p > z w [QCD], with their decays specified in the MadSpin card, need to be added by hand, since MadGraph at NLO is not giving them in the Feynman diagrams. The process p p > b b~ vl vl~ QED<=4 [QCD] gave ALL the Feynman diagrams, but the software couldn't run (apparently there was some problem in the param_card.dat).

At the meeting (06/06/2019)

I've generated the processes p p > b b~ vl vl~ and p p > b b~ vl vl~ QED<=4 at LO, and the diagrams were the same as at NLO ("the same" meaning at their compatible orders). I really need the QED<=4 part of the syntax; otherwise, diagrams with photons/Higgs will not be accounted for.


I must talk to Thiago about all the problems with the BG at NLO and the signal.

I'm also checking the macro, and there must be a double counting in the b-tagging part. No idea how to solve it. The efficiency of the b-tagging must be inserted in the code, but Thiago will answer this. The graphs for the dark Higgs masses of 50, 70 and 90 GeV are below, with a Landau fit to get the most probable value of the invariant mass of the fat jets, since it worked better than the other fits (I forgot to put the dark Higgs masses in each of them).

  • dh_results.png:
    dh_results.png

The problem at NLO was solved using the newest version of MadGraph (2.6.5), with some changes in the CT_interface.f files. When a process is generated, the output folder contains them at paths like /MG5_aMC_v2_6_5/output_folder/SubProcesses/P0_something_something/V0_something_something/CT_interface.f. Inside those files, at approximately line 650 (it can be higher or lower), two assignments must be changed, ABSP_TMP(I)=0.E0+0_16 and P_TMP(I,J)=0.E0+0_16: the zero right after the letter E must be erased (i.e. 0.E0+0_16 becomes 0.E+0_16). Otherwise, the check_poles test will fail.

19/06/2019

I've been talking to Thiago about rewriting the macro, and apparently it's good now, but I'm having problems when applying it to the signal (lots of fat jets have only one jet inside them). The problem seems to be that I forgot to change the ParameterR variable inside the Delphes card when generating the 300000 events for each mass of the dark Higgs. I've simulated 30000 events of the dark Higgs signal processes for m_{d_{H}} = 50 GeV to check it. EVERY IMAGE ABOUT THE DARK HIGGS IN THIS TWIKI IS WRONG!!!

The name of the folder is dh_50gev_30000_paperjetreco, and I'm using the macro (inside the Delphes folder) V3_terceiromais_dark_Higgs.C. I'll also change the way the jets are built: for now, the jets are reconstructed using only the charged particles (tracks); I'll use the particle flow that includes everything (in Delphes it's the EFlow array).

In the meantime, Thiago taught me how to write a .root file that contains all the canvases I need, instead of running the macro over and over again (extremely helpful). I also learned how to properly handle data structures in C++, and that saves so much time and effort. It makes everything clearer.

Apparently the change in ParameterR solved the problem. I'll generate 300k events for the 50 GeV dh to check it again. The files will be named certo_300k_50gev.root and results_certo_300k_50gev_prunedmass.root (I'll also use the trimmed and the "normal" fat jet masses). If this works, the masses of 70 GeV and 90 GeV will be tested!

01/07/2019

It seems that everything is working out. My b-tagging is almost exactly like Delphes' (I only need to include the counting of light-quark, gluon and c jets). I'm getting pretty awesome results that match the paper. Every macro named V*.C was used to build and test the b-tagging. V1* and V2* failed in every aspect. V3* started to work, but it was a very crude b-tagging, counting some events (read: fat jets) that didn't need to be there. V4* removed those events, but it was still very crude. V5* was used to test the cut in DeltaR(b, j) and worked perfectly. V6* was already comparing my b-tagging with (spoiler alert!) the Delphes BTag variable, which is working. V7* applies the right b-tag efficiency, using a random number to set it (just like Delphes does). V8* does the b-tagging for every jet in the events (light quark and c) with a misidentification efficiency, but it isn't working properly, since the results are very different from those with the Delphes b-tag variable.

The signal events are the certo_*.root files, which were generated with these parameters:

  • All of them had m_{\chi} = 100 GeV;
  • 300000 events in the MadGraph run for each;
  • All of them were rescaled to L_{r} = 40 fb^-1;

Mass of the particles                 | Luminosity          | Scale factor
m_{d_{H}} = 50 GeV, m_{Z'} = 1100 GeV | L = 411.6244 fb^-1  | 0.0972
m_{d_{H}} = 70 GeV, m_{Z'} = 1100 GeV | L = 506.0932 fb^-1  | 0.079
m_{d_{H}} = 90 GeV, m_{Z'} = 1100 GeV | L = 616.9749 fb^-1  | 0.0648
m_{d_{H}} = 70 GeV, m_{Z'} = 625 GeV  | L = 115.4434 fb^-1  | 0.3465
m_{d_{H}} = 70 GeV, m_{Z'} = 1700 GeV | L = 2633.9161 fb^-1 | 0.0152
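
The scale factor column is just the target over the sample luminosity, SF = 40 fb^-1 / L; e.g. for the first row, 40/411.6244 ≈ 0.0972.
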
Besides that, I've figured out why the Delphes b-tagging wasn't working, and it's because I don't pay attention to anything. I was overwriting the Reco Jet information with the Gen Jet information, so the Reco Jets weren't being b-tagged, only the Gen Jets. Now I'm generating the files mais_certo_*.root again with the same prescription as before, and for m_{d_{H}} = 50 GeV I've already got very nice results. I've also compared my b-tagging with these results, and they are very similar (as they should be!). Don't forget some nice pictures! The next step is to talk to Thiago and check what I'm going to vary to get some efficiencies. Also, I must generate some backgrounds at NLO again to compare with the paper.

Now that all my courses are finished (I hope so) I'll have more time to study C++, python, QFT and HEP.

Below are the images comparing my signal (left images) with the paper signals (right images taken from arXiv:1701.08780 [hep-ph]):

  • comp_dh.png:
    comp_dh.png

  • comp_Z.png:
    comp_Z.png

02/07/2019

I'm doing all the backgrounds again and setting up a macro to analyze them, and a signal file just for fun. Apparently, my old BG results were wrong, since some cuts were missing (lepton isolation variable) or were wrong (I was not getting the fat jet pt or eta after trimming), and I was using the wrong fat jet mass (the Delphes default mass before trimming or pruning). The updated/corrected macro for the backgrounds is mais_certo_ALL_backgrounds_combined_rightratio.C, and a test file was already generated with the name bgchange_mais_certo_ALL_bg_sig_90gev_withiso.root, where I tested the signal with m_{d_{H}} = 90 GeV and some old background processes done at LO with the names withelecmuon_*. The new result is in the image below (they have the same luminosity, and the figure on the right has a k-factor of 4):

  • bg_certo_comp.png:
    bg_certo_comp.png

The image on the left was taken from arXiv:1701.08780 [hep-ph].

The new BG files will be named pp2tt_50000_1x.root, pp2wbb_w2lv_50000_1x.root, pp2wwbb_50000_1x.root, NLO_pp2zbb_z2vv_50000_1x.root, NLO_pp2zz_50000_1x.root and NLO_pp2wz_50000_1x.root. I have to remember that there are copies of these files in access2, including the signal ones that I talked about on 01/07/2019.

04/07/2019

I've written a macro to analyze all 6 .root files for the BG. It's named NLO_mais_certo_ALL_backgrounds_combined_rightratio.C. I still need to correctly get only the prompt electrons/muons for the lepton veto, but I need to talk to Thiago about this first, since I don't want to use the GenParticle branch of the Delphes tree (it's EXTREMELY slow!!!). The files ATLAS_kfactors_NLO_bgchange_mais_certo_ALL_bg_sig_90gev_withiso.root and ATLAS_kfactors_NLO_changed_mais_certo_ALL_backgrounds_mJ.pdf have been created using all the right BG and different k-factors for each background. The results for the number of events of each BG and the corresponding ATLAS numbers, in the fat jet mass range of 80 GeV to 280 GeV, are in the table below:

Result     | Diboson | Z+bb | W+bb | tt
BG (ATLAS) | 1.20    | 3.80 | 2.48 | 4.83
BG (mine)  | 0.53    | 2.91 | 0.79 | 1.67
k-factors  | 2.3     | 1.3  | 3.2  | 2.9
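
These k-factors are just the ATLAS-to-mine ratios per process; e.g. for the diboson column, 1.20/0.53 ≈ 2.3.
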
The comparison of my BG with the paper is in the image below:

  • comp_bg_NLO.png:
    comp_bg_NLO.png

The image on the left was taken from arXiv:1701.08780 [hep-ph].

An important factor comes from the MadGraph simulation at LO, which accounts for some phenomenon that I don't know what it is. Apparently, if one computes the luminosity from the number of generated events and the process cross section, it comes out 1.2 times smaller than the MadGraph result. Unfortunately, at NLO MadGraph doesn't give any luminosity, and it must be calculated using this procedure. Using that factor of 1.2 only in the NLO-generated BG processes, the new k-factors are:

Result     | Diboson | Z+bb | W+bb | tt
BG (ATLAS) | 1.20    | 3.80 | 2.48 | 4.83
BG (mine)  | 0.58    | 3.50 | 0.79 | 1.67
k-factors  | 2.1     | 1.1  | 3.2  | 2.9

And the new plot is:

  • xsec_factor_bg.png:
    xsec_factor_bg.png

09/07/2019

To calculate the efficiencies, I still don't know exactly what to do. I'll perform some tests and talk to Thiago later this week.

To calculate the efficiencies I'll need all the processes' cross sections, which are in the table below:

Process | x-sec [pb]
Dizz    | 0.002563
Diwz    | 0.001477
Diww    | 0.060691
Top     | 0.062992
W       | 0.221830
Z       | 0.137934
50      | 0.874707
70      | 0.711139
90      | 0.583691

The macro punzi_eff_bg_separated.C was created for that, and I'm calculating 2 types of efficiencies: in one of them, I'm summing (BGeff*BGxsec) and then multiplying by 40000 (the required luminosity); in the other, I'm summing (nBGfin/nBGini), then multiplying by the sum of (BGxsec) and then by 40000. The Punzi efficiency is being calculated as sigeff/sqrt(1+BG), where BG is (nBGfin/nBGini)*BGxsec*lumin. I'm having some numerical problems, since C++ doesn't want to show me the values of the calculations because nfin/nini << 1. I'll try some procedures to make it work.
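
A minimal sketch of the Punzi figure of merit as described above (my reconstruction, not the macro itself; the numbers in main() are illustrative only). Casting the counts to double before dividing keeps nfin/nini << 1 from truncating to zero under integer division, and printing in scientific notation makes the tiny values visible:

  #include <cmath>
  #include <cstdio>

  // sigeff/sqrt(1+BG), with BG = (nBGfin/nBGini)*BGxsec*lumin as in the text
  double punzi(long nSigFin, long nSigIni,
               double bgEff, double bgXsecPb, double lumiPbInv) {
    double sigEff = static_cast<double>(nSigFin) / nSigIni;  // avoid integer division
    double bg = bgEff * bgXsecPb * lumiPbInv;                // expected BG events
    return sigEff / std::sqrt(1.0 + bg);
  }

  int main() {
    // hypothetical counts, the Top xsec from the table, 40000 pb^-1 luminosity
    std::printf("%.6e\n", punzi(500, 300000, 208.0 / 25000.0, 0.062992, 40000.0));
    return 0;
  }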

12/07/2019

These last days I've been calculating the efficiencies of some cuts on the events. The files are named punzieff_something, and the results are pretty impressive. I still need (in some of the graphs of eff_BG x eff_sig) to find the right values of the cuts for each dot.

Besides that, I've talked to Thiago, and we decided to simulate more processes for the dark Higgs signal with different m_{d_{H}} and m_{Z'}. The tables below contain the cross sections for those processes (300k events were simulated for each pair of masses, using access2 from GridUNESP):

*m_{Z'} = 1100 GeV, m_{\chi} = 100 GeV:

m_{d_{H}} [GeV] | approx x-sec [pb]
50  | 0.874
60  | 0.788
70  | 0.711
80  | 0.643
90  | 0.583
100 | 0.530
110 | 0.477
120 | 0.410
130 | 0.325
140 | 0.222
150 | 0.118
160 | 0.0313
170 | 0.0059
180 | 0.0037

*m_{d_{H}} = 70 GeV, m_{\chi} = 100 GeV:

m_{Z'} [GeV] | approx x-sec [pb]
500  | 4.443
625  | 3.116
1000 | 0.961
1100 | 0.711
1500 | 0.230
1700 | 0.137
2000 | 0.065
2500 | 0.020
3000 | 0.0068
3500 | 0.0024
4000 | 0.00083

20/07/2019

First of all, Thiago talked about writing a paper!!! I've already checked the works that cite the "Hunting the dark Higgs" (arXiv:1701.08780) paper, and I must talk to Thiago about it.

The macro punzi_eff_bg_separated.C is being used to analyze cuts on the fat jet mass for the m_{d_{H}} = 70 GeV process. I'll do that for the other masses as well. New macros called punzi_eff_metcut.C, punzi_eff_etacut.C, punzi_eff_ptcut.C and punzi_eff_lepcut.C were created to analyze every other cut. New cuts can be thought of at any time!!!

The macro V9_terceiromais_dark_Higgs.C was used on the processes with different m_{d_{H}}. An interesting thing happened with the fat jet invariant mass from those processes: its peak is displaced by about 10% for each mass (comparing the m_{d_{H}} set in the cards with the fitted m_{J} mass). The comparison of those masses is in the image below:

  • mass_comp_ALL.png:
    mass_comp_ALL.png

I've talked to Pedro, and we concluded that the fat jet mass resolution gets worse because the greater the m_{d_{H}} set, the less collinear the b's are. To check that, I've tightened the MET cut (from 500 to 700), and we saw that the resolution got a bit better. A study of the efficiency of this cut may help with this problem. Also, this seems to be a characteristic of the jet and fat jet invariant mass resolution (the difference between m_{J} and m_{d_{H}} scales with the mass).

The macro V9_terceiromais_dark_Higgs.C was also used on the processes with different m_{Z'}, and the results were just as I expected.

23/07/2019

All the macros are working perfectly. The macro punzi_eff_lepcut.C needed to have the numbers of events exchanged, since the restriction on the lepton p_T made the cut looser (and the relative efficiencies were getting higher than 1). I must talk to Thiago about this.

I'm using the fat jet pt macro on all the other dark Higgs masses to see what happens to the efficiency. Hopefully it will change something. I'll also use the fat jet mass macro to analyze the events.

29/07/2019

I'm running all the macros again with the Z' mass variations, just to check what is happening. I still need to plot the signal and background MET one over the other to check if the MET efficiency is all right. If something is missing, I can generate events with a looser cut on MET and merge them with the ones I already have.

30/07/2019

After running the macro to calculate the efficiencies of the MET cut, no important result was found, in the sense that the significance only goes down. I'll check why that is happening. Also, Thiago and I saw that some of the processes generated by MadGraph have a Higgs or a dark Higgs mediating the production of two b's and two DM particles. I'll look at the Lagrangian implemented inside the UFO files.

I've created a macro to plot the MET and the fat jet p_T in the same histogram to see what is happening with the Punzi significance. Thiago said that it is important to compare each BG separately with the signal, but also all the BGs together with the signal. I've only rescaled the histograms by each process's cross section. Later, I can normalize everything to see what happens. Thiago showed me a variable that might be interesting to plot and to cut in some interesting region: \sqrt{2 * p_{T}^{J} * MET * (1-cos(\Delta \phi (J,MET)))}.
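
A short sketch of that variable (assuming the fat jet J and the MET are given by their pT/phi components; TVector2::Phi_mpi_pi wraps the phi difference into [-pi, pi]):

  #include "TVector2.h"
  #include <cmath>

  // M_T = sqrt(2 * pT(J) * MET * (1 - cos(dphi(J, MET))))
  double transverseMass(double fatjetPt, double fatjetPhi,
                        double met, double metPhi) {
    double dphi = TVector2::Phi_mpi_pi(fatjetPhi - metPhi);
    return std::sqrt(2.0 * fatjetPt * met * (1.0 - std::cos(dphi)));
  }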

On Tuesdays and Wednesdays I'll start participating in the dark Higgs technical chat (Tuesdays) and the monojet meeting (Wednesdays). This will be very nice for learning a lot of new stuff with people who work on the same things I'm working on right now.

I've generated histograms of the transverse mass, and it really seems that it can be cut to increase the significance of the search!!!

01/08/2019

Yesterday I attended the monojet meeting, and they were talking about some discrepancies in the cross sections of some monojet samples, which could be caused by the use of a wrong PDF.

Today I've created a macro to analyze the Punzi significance of cuts on the transverse mass, and it really seems that awesome results will come out. I'll show this at today's SPRACE physics meeting. I think I should run the macros for the other masses as well.

The results from the Punzi significance for cuts on the transverse mass show a very small improvement in the significance for one of the cuts, and I'll show it at today's meeting. It's not a great gain, but it's something.

05/08/2019

At the 01/08 meeting, Pedro suggested that I cut the M_{T} variable in a window around the Z' mass. Maybe I'll have some gain in efficiency here. I don't remember exactly when, but I have written a macro to plot the cross sections of the processes for different m_{d_{H}} or m_{Z'}. It's named plot_dh_xsec_wo_cuts.C (since it only has the MadGraph+Pythia+Delphes generation cuts), and the files created were xsec_mdh_var_nocuts.pdf and xsec_mdh_var_nocuts.pdf.

I'll begin to create folders for each different macro that I'll use. All the files related to a test of that macro will be in subfolders inside the "master" one related to that macro. I hope that all my files get more organized in this way. The folder name will be exactly like the macro name.

06/08/2019

Yesterday I showed Thiago the results of the Punzi significance for the transverse mass cuts for the file with m_{d_{H}} = 70 GeV and m_{Z'} = 1100 GeV, and it wasn't very satisfying. Thiago asked me to do the same with a bigger m_{Z'} file, and I did it for the 2000 GeV one, which got us even more confused. I'm using the LHE files to calculate the p_{T} of the sum of the four-momenta of the DM particles, and it's exactly like the MET in the root file corresponding to this mass.

The macro name is read_fourvector.C, and there is a folder with the same name, with all the files/canvases of this macro.

09/08/2019

I presented a summary of the CWR paper yesterday at the meeting, and I've already sent my comments/doubts to Pedro.

Thiago helped me realise that there was an error in the macro read_fourvector.C. I've fixed it, and inside the folder read_fourvector there is another folder named macro_certa, where all the right plots are, including a figure that compares the MET histograms for different values of m_{Z'}.

Today Thiago and I talked about what I should do in the next few weeks, and these are my tasks:

  • Build a table with all the numbers of events;
  • Build a cut flow for some (all) of the m_{d_{H}} values;
  • Learn how to use ROOT TLimit or TRolke (or both);

12/08/2019

I've modified the macro punzi_eff_bg_separated to find the best window around the best center value (which I got through the highest Punzi significance). The files are inside the folder best_punzi_center_varying_binsize, in the folder named after the macro. Now I'll modify it to find the numbers of signal and BG events after all the cuts, in the region of highest Punzi significance for the center and the window; it will be named punzi_eff_table_cutflow.C.

There were some errors in the tables, and I'll see what is happening tomorrow. But it seems that I'm on the right track.

14/08/2019

The first table is below. The number of events is normalized to a luminosity of 140 fb^{-1}.

m_{d_{H}} [GeV] | N_{S}   | N_{B}   | m_{J}^{center} [GeV] | m_{J}^{window} [GeV]
50  | 77.8346 | 59.5086 | 50  | 20
60  | 101.74  | 47.6723 | 58  | 16
70  | 136.038 | 68.2943 | 66  | 20
80  | 166.037 | 122.624 | 74  | 28
90  | 172.762 | 136.005 | 82  | 28
100 | 175.813 | 140.486 | 98  | 36
110 | 174.741 | 131.641 | 110 | 40
120 | 166.938 | 105.836 | 112 | 36
130 | 122.976 | 87.6493 | 122 | 32
140 | 104.722 | 136.891 | 130 | 44
150 | 57.0858 | 165.731 | 146 | 44
160 | 18.6357 | 271.383 | 150 | 72
170 | 3.43249 | 250.224 | 160 | 64
180 | 2.07056 | 218.363 | 166 | 56

Some examples of cutflow are:


m_{d_{H}} = 50 (50\pm10) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}} | s_{Punzi} (new BG)
MET           | 1163.78 | 7383.93 | 0.00031072  | 12.5877 | 0.000298948
Lep           | 1163.78 | 7383.93 | 0.00031072  | 12.5877 | 0.000298948
Fat jet p_{T} | 1159.02 | 7374.31 | 0.000309648 | 12.5468 | 0.000297998
Fat jet eta   | 1152.9  | 7324.61 | 0.000309039 | 12.5216 | 0.000297501
Fat jet mass  | 77.8346 | 59.5086 | 0.000197912 | 6.64154 | 0.000187386

m_{d_{H}} = 50 (50\pm10) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}}
MET           | 1163.78 | 7383.93 | 0.00031072 | 12.5877
Lep           | 1163.78 | 7383.93 | 0.00381325 | 12.5877
Fat jet p_{T} | 1159.02 | 7374.31 | 0.00380012 | 12.5468
Fat jet eta   | 1152.9  | 7324.61 | 0.00380592 | 12.5216
Fat jet mass  | 77.8346 | 59.5086 | 0.00275026 | 6.64154


m_{d_{H}} = 110 (110\pm20) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}} | s_{Punzi} (new BG)
MET           | 917.668 | 7383.93 | 0.000450836 | 10.0718 | 0.000433755
Lep           | 917.668 | 7383.93 | 0.000450836 | 10.0718 | 0.000433755
Fat jet p_{T} | 912.289 | 7374.31 | 0.00044848  | 10.0218 | 0.000431608
Fat jet eta   | 905.982 | 7324.61 | 0.000446863 | 9.98628 | 0.000430179
Fat jet mass  | 174.741 | 131.641 | 0.000579978 | 9.98305 | 0.000546206

m_{d_{H}} = 110 (110\pm20) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}}
MET           | 917.668 | 7383.93 | 0.000450836 | 10.0718
Lep           | 917.668 | 7383.93 | 0.00381325  | 10.0718
Fat jet p_{T} | 912.289 | 7374.31 | 0.00379336  | 10.0218
Fat jet eta   | 905.982 | 7324.61 | 0.00379966  | 9.98628
Fat jet mass  | 174.741 | 131.641 | 0.00535427  | 9.98305


m_{d_{H}} = 180 (166\pm28) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}} | s_{Punzi} (new BG)
MET           | 9.26213 | 7383.93 | 0.000587774 | 0.10772  | 0.000565505
Lep           | 9.26213 | 7383.93 | 0.000587774 | 0.10772  | 0.000565505
Fat jet p_{T} | 9.19162 | 7374.31 | 0.000583673 | 0.10697  | 0.000561715
Fat jet eta   | 9.14414 | 7324.61 | 0.000582592 | 0.106777 | 0.00056084
Fat jet mass  | 2.07056 | 218.363 | 0.000707119 | 0.13946  | 0.000698256

m_{d_{H}} = 180 (166\pm28) [GeV] | N_{S} | N_{B} | s_{Punzi} | N_{S}/sqrt{N_{S}+N_{B}}
MET           | 9.26213 | 7383.93 | 0.000587774 | 0.10772
Lep           | 9.26213 | 7383.93 | 0.00381325  | 0.10772
Fat jet p_{T} | 9.19162 | 7374.31 | 0.00378668  | 0.10697
Fat jet eta   | 9.14414 | 7324.61 | 0.00380635  | 0.106777
Fat jet mass  | 2.07056 | 218.363 | 0.00491111  | 0.13946


I can't understand why the Punzi significance and the number-of-events statistical significance get lower after each cut.

19/08/2019

Today I've been working on the suggestions everyone gave me at last Thursday's SPRACE physics meeting. I'm still not sure about the Punzi significance getting lower, and I'll talk to Thiago to show him these results. I've also changed the calculation of the BG for the Punzi significance to the expression that Ana and Tulio are using, which is the sum over the BGs of the efficiency times the cross section times the luminosity for each BG separately, but it didn't change the results much. I'll talk to someone about this and check what the right expression is.

I've created a new macro called punzi_eff_table_cutflow_lepbef to check what happens if I do the lepton veto before the MET cut, and how this changes the Punzi significance.

I've created a macro to test the TLimit class from ROOT. It's named tlimit.C, and it uses the ROOT file 140_fb_newbin_xsecfactor_ATLAS_kfactors_NLO_bgchange_mais_certo_ALL_bg_sig_70gev_withiso.root, since it already has 140 fb^{-1} of luminosity and BG + Sig to test. I'm still very confused about this class and about what its outputs represent for the dark Higgs model. I've only tested it with one mass, and I must do it for the other masses as well.

I'm making the poster for ENFPC, and I'll reuse some material from the previous poster. I might put something about the validation of the model (following the dark Higgs paper) and some of these efficiency/significance results.

20/08/2019

In the folder NLO_mais_certo_ALL_backgrounds_combined_rightratio are the histograms that compare the fat jet mass of BG only, signal only and BG+sig for all the m_{d_{H}} values. I'll use those to test TLimit and TRolke. The files are named 140_fb_newbin_xsecfactor_ATLAS_kfactors_NLO_bgchange_mais_certo_ALL_bg_sig_"+m_{d_{H}}+"gev_withiso.root.

I've created a macro to test the TRolke ROOT class, named trolke_example.C, based on the example in its ROOT class reference. I'm still trying to figure out what to do with it, what efficiencies I should use in my case, and what the ratio between the signal and BG regions is.

27/08/2019

Today I've talked to Thiago, and I'm testing the TLimit ROOT class using only some numbers of events. For now I've done it for 3, 5 and 10 events, in 8 different cases: (sig, bg, data) = (0,0,0); (n,0,0); (0,n,0); (0,0,n); (n,n,0); (n,0,n); (0,n,n); (n,n,n). Actually, I need to use 0.0000000001 events to represent 0, since I was getting some error messages.
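
A minimal version of one of those (s, b, d) tests, using the double-valued ComputeLimit overload (a sketch of the 5-event case, not the exact test macro):

  #include "TLimit.h"
  #include "TConfidenceLevel.h"
  #include <cstdio>

  void tlimit_sketch() {
    double s = 5.0, b = 5.0;  // tiny values like 1e-10 stand in for exact zeros
    int d = 5;
    TConfidenceLevel* cl = TLimit::ComputeLimit(s, b, d, 50000);
    std::printf("CLs = %f, CLsb = %f, CLb = %f\n",
                cl->CLs(), cl->CLsb(), cl->CLb());
    delete cl;
  }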

I'll start doubling the data number of events of the last case, to see what happens (I expect that CLs and CLsb will increase as n gets higher).

29/08/2019

I've tested the TLimit class, and it works reasonably well. Using it, I could set exclusion masses for d_{H} and Z'. The table of the number of events for each m_{Z'} is below (the luminosity was set to 140 fb^{-1}).

m_{Z'} [GeV] | N_{S}   | N_{B}   | m_{J}^{center} [GeV] | m_{J}^{window} [GeV]
500  | 228.074 | 68.2943 | 66 | 20
625  | 216.908 | 68.2943 | 66 | 20
1000 | 142.014 | 68.2943 | 66 | 20
1100 | 136.038 | 68.2943 | 66 | 20
1500 | 128.621 | 68.2943 | 66 | 20
1700 | 101.867 | 68.2943 | 66 | 20
2000 | 67.6939 | 68.2943 | 66 | 20
2500 | 28.3811 | 68.2943 | 66 | 20
3000 | 11.3235 | 68.2943 | 66 | 20
3500 | 4.58453 | 68.2943 | 66 | 20
4000 | 1.72912 | 68.2943 | 66 | 20

The graphs are below. Everything above the dashed lines is excluded with more than 95% CL.

  • cls_mdh.png:
    cls_mdh.png

  • cls_mzp.png:
    cls_mzp.png

In the first plot, m_{Z'} = 1100 GeV and m_{\chi} = 100 GeV were set. In the second plot, m_{d_{H}} = 70 GeV and m_{\chi} = 100 GeV.

06/09/2019

This was ENFPC 2019 week. While there, I learned how to make my DM particle look like a neutrino for the MadGraph missing-energy generation cut. Now I can run only 5k events with a MET restriction, instead of 300k events without a MET restriction, and obtain the same result. The change is in the SubProcesses/setcuts.f file, where the line if (abs(idup(i,1,iproc)).eq.1000022) is_a_nu(i)=.true. needs to be added after the same lines with the neutrino PDG codes. This will improve my results a lot.

Today I did what Thiago told me to do a couple of weeks ago. For each value of m_{d_{H}} and m_{Z'}, I needed to find the number of events N that gives almost 95% CLs (actually 1-<CLs>) using the TLimit ROOT class, and calculate the cross section for that value using the formula \sigma = N/(lumin x eff). Then I calculated the cross section using the number of events that I found for each m_{d_{H}} or m_{Z'} concerning each BG, and compared both. The graphs are below. Everything above the blue line is excluded with 95% certainty.

  • mdh_xsec_cl.png:
    mdh_xsec_cl.png

  • mzp_xsec_cl.png:
    mzp_xsec_cl.png

10/09/2019

Today I'll start generating new mass points for the CLs plot, to get the exact point at which the curves cross each other, for different m_{d_{H}} and m_{Z'}. Also, I'll check the difference between the Delphes CMS cards with and without pile-up.

I'll run the events locally, since they can run while I do other stuff. Only a couple of thousand events (10k is enough, I guess) will be generated for each point, since I've learned how to apply the missing-energy cut in the generation of the signal processes.

In the folder Template/LO/SubProcesses/ inside the MG5 folder, I found the file setcuts.f. Maybe changing that file will change all the other ones that are newly created. I've tested it, and it actually works! This will make everything easier, even when running things in access2, since there I can't change anything inside a folder before running a given process.

I've generated the mass point m_{d_{H}} = 155 GeV, and, since all d_{H} files in the Delphes folder are named as mais_certo_300k_*gev.root, I'll name the root file as mais_certo_300k_155gev.root (even though it has only 10k events). The cross section of this process was 0.00567 pb (because of the MET cut).

The step-by-step from the root file to a point in the CLs graph is as follows:

  • Use the macro V10_terceiromais_dark_Higgs.C to check the m_{J} variable;
  • Find the best value of the Punzi significance for the m_{J}^{C} and m_{J}^{W} using the macro punzi_eff_bg_separated.C;
  • Get the number of events of signal and BG in that region, and the efficiency using the macro punzi_eff_table_cutflow.C;
  • Use the macro tlimit_mdh_mzp.C with those number of events to find 1-CLs;
  • Find the 95% number of events for that given N_{BG};
  • Convert everything to xsec using the expression \sigma = N/(lumin x eff), multiplying the efficiency by 1.2 * xsec (with MET cut)/xsec (without MET cut), with the macro tlimit_to_xsec_mdh_mzp.C (a worked instance is below).
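
As a worked instance of the last step, for the 155 GeV point the factor multiplying the efficiency would be 1.2 x 0.00567/0.06797 ≈ 0.100 (using the with/without MET cut cross sections quoted in this entry).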

The information for the 155 GeV mass point is:

m_{d_{H}} [GeV] | N_{S}  | N_{B}   | m_{J}^{center} [GeV] | m_{J}^{window} [GeV] | Efficiency
155             | 37.507 | 271.383 | 150                  | 72                   | 0.0873921

To rescale the cross section, I calculated it by simulating in MadGraph the same process without the MET cut (since all the other points in the graph have this restriction), and the result was 0.06797 pb. An important point is the 1.2 MadGraph factor in the final result. To find it consistently with the other points, this factor needed to be taken into account.

The 155 GeV mass point didn't touch the 95% exclusion limit, and I'll try to simulate the 156 GeV one (it was already very close).

I've found the 1.2 factor inside the MadGraph folder; a grep "1.2" *.f inside Template/LO/Source shows it. This factor doesn't appear in the NLO processes.

11/09/2019

I've already simulated the 156 GeV m_{d_{H}} point without the MET cut to calculate the cross section, and the result was 0.05776 pb. Today I'll also do the same thing with the Z' masses.

The process with the MET cut has a cross section equal to 0.004846 pb. The file in the Delphes folder is called mais_certo_300k_156gev.root.

The information for the 156 GeV mass point is:

m_{d_{H}} [GeV] | N_{S}   | N_{B}   | m_{J}^{center} [GeV] | m_{J}^{window} [GeV] | Efficiency
156             | 25.1023 | 139.905 | 150                  | 36                   | 0.0691804

The plot with the new points is below

  • xsec_mdh_155_156.png:
    xsec_mdh_155_156.png

I've simulated the mass point m_{Z'} = 2750 GeV, which has a cross section equal to 0.01159 pb (without the MET cut) and 0.002832 pb (with the MET cut). The file in the Delphes folder is named mais_certo_300k_2750gev.root. The results for this mass point are:

m_{Z'} [GeV] | N_{S}   | N_{B}   | m_{J}^{center} [GeV] | m_{J}^{window} [GeV] | Efficiency
2750         | 17.5773 | 68.2943 | 66                   | 20                   | 0.0891869

The plot with the new point is below

  • xsec_mzp_2750.png:
    xsec_mzp_2750.png

The new 1-CLs plots are below

  • cls_mdh_com_intermediarios.png:
    cls_mdh_com_intermediarios.png

  • cls_mzp_com_intermediarios.png:
    cls_mzp_com_intermediarios.png

17/09/2019

Today I've studied the difference between the Delphes CMS cards with and without PU. Apparently, the particles (including the ones coming from PU) have no correction until the end of the calorimeters. After that, the first correction (PU track correction) is applied, which interferes with the jets (which have another correction: jet PU correction) and with the isolation of electrons, muons and photons. This means that almost all of the Delphes outputs will have corrections in the end. An interesting feature is that there isn't a module for the FatJets. I guess I'll introduce it by hand and check what the difference is (in practice) with respect to the card without PU. There is a flowchart that shows the steps that Delphes takes while working. It's inside my main folder, named delphes_PU_finished.png.

I've also looked inside the Pythia files to write down very clearly the processes that it sees while simulating the signal processes. In MadGraph syntax, the following 6 different processes were found:

  • q ~q > H* > hs n1 n1, hs > b ~b;
  • q g > H* q, H* > hs n1 n1, hs > b ~b;
  • q ~q > Z'* > hs n1 n1, hs > b ~b;
  • q g > Z'* q, Z'* > hs n1 n1, hs > b ~b;
  • q ~q > hs* hs, hs* > n1 n1, hs > b ~b;
  • q g > hs* hs q, hs* > n1 n1, hs > b ~b;

Particles marked with * are the ones whose mass is very close to the m_{Z'} set in the MadGraph param_card. It's interesting that only those particles decay to the 2 DM particles, even when we have 2 intermediate dark Higgses.

Today I've started looking at the TLimit class to see how to include uncertainties in the BG. Apparently, it's just a matter of using the syntax TConfidenceLevel * TLimit::ComputeLimit (Double_t s, Double_t b, Int_t d, TVectorD * se, TVectorD * be, TObjArray * l, Int_t nmc = 50000, bool stat = false, TRandom * generator = 0), where s, b and d are signal, BG and data, respectively; se and be look like the errors per bin of the signal and BG histograms (in my case that will be only a number); I don't know what l is; nmc is the number of Monte Carlo simulations to do; stat depends on the shape of the number of events (whether I want to count one tail of the distribution, or none of them); and generator is the random generator. I've also discovered how to find the results of TLimit with +/- 1 and +/- 2 sigmas (the default is 0, the average). I'll try everything out.

I'm having some problems implementing the BG errors in TLimit. The classes TVectorD and TObjArray are a bit weird, and I'll talk to Thiago tomorrow to see how to do it. In the meantime, I'm wondering whether it's suitable to just vary the number of BG events by some different amounts, like 10%, 20% and others, and see how the CLs variable changes and how this affects the cross sections.
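
My current reading of that overload, as a heavily hedged sketch (whether se/be hold one relative error per source and l holds the source names is exactly the assumption I still need to confirm):

  #include "TLimit.h"
  #include "TConfidenceLevel.h"
  #include "TVectorD.h"
  #include "TObjArray.h"
  #include "TObjString.h"
  #include <cstdio>

  void tlimit_syst_sketch() {
    TVectorD se(1), be(1);
    se[0] = 0.10;  // illustrative 10% signal uncertainty (assumption)
    be[0] = 0.20;  // illustrative 20% BG uncertainty (assumption)
    TObjArray names;
    names.Add(new TObjString("syst1"));  // one common error source (assumption)
    // s and b taken from the 110 GeV row of the 14/08/2019 table; d set to ~s+b
    TConfidenceLevel* cl =
        TLimit::ComputeLimit(174.741, 131.641, 307, &se, &be, &names);
    std::printf("CLs = %f\n", cl->CLs());
    delete cl;
  }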

I can do the same thing with the -2, -1, 0, 1 and 2 sigma expected CLs, converting everything to signal numbers of events and then to cross sections (would this be a brazilian plot?).

18/09/2019

Today Pedro and I went through the class about statistics. Everything seems clear, and the exercises were extremely helpful for understanding different types of CL calculations, especially the part about the CLs method (since I had no clue what it was doing). I'll prepare something for next week's meeting.

I'm also working on a macro to write the results of TLimit with 1 and 2 sigmas. It's named tlimit_testes.C. The other important macro of today is tlimit_tlimit.C, where I'm testing the uncertainties in the signal and BG values. Unfortunately, the TLimit calculation with histograms without uncertainties was not equal to the TLimit calculation with double values. I checked that, and what was happening was that I had forgotten to turn off the statistical uncertainties when comparing both. I'll ask Thiago if it's important to account for both uncertainties right now.

I've created the macro brazilian_not_brazilian_plot.C with the 1/2 sigma values, and I've already updated the macro tlimit_to_xsec_mdh_mzp.C to plot that graph. It is very impressive, but I don't know how to make it beautiful for now. I'll show this plot and some results at tomorrow's meeting.

20/09/2019

Yesterday, while talking to Pedro, we found a little divergence in my macros. I was using the MG5 luminosity to perform all the calculations up to the tlimit macros. Unfortunately, the correct luminosity is obtained from the expression L = N/xsec, but MadGraph gives me 2 numbers for each of those quantities. The first pair is the number of events in the run_card (which I set for the Monte Carlo simulations) and the xsec assigned to the process for this number of events and a fixed luminosity. The other pair is the number of events after the Pythia matching/merging and the xsec assigned to that number of events, which give the same luminosity as the numbers before. Since Delphes uses the latter, I needed to change the luminosity I was calculating to Lnew = N(after matching/merging) / xsec(after matching/merging). This shall not change the results significantly, but it will make the cross sections calculated through TLimit and the MG5 ones more consistent.

I've already compared the m_{J} graphs from the paper with the ones using this new quantity to validate this case, and it seems pretty good.

The new xsecs (after matching/merging) for the different values of m_{d_{H}} are:

m_{d_{H}} [GeV] | approx x-sec [pb]
50  | 0.536
60  | 0.486
70  | 0.435
80  | 0.396
90  | 0.359
100 | 0.325
110 | 0.291
120 | 0.251
130 | 0.198
140 | 0.135
150 | 0.0723
155 | 0.00368 (w/ MET cut)
156 | 0.00311 (w/ MET cut)
160 | 0.0192
170 | 0.00361
180 | 0.00223

I still need to do the processes for 155 and 156 without the MET cut, but with Pythia and Delphes, to get the merged xsec.

The new xsecs (after matching/merging) for the different values of m_{Z'} are:

m_{Z'} [GeV] | approx x-sec [pb]
500  | 3.030
625  | 2.069
1000 | 0.594
1100 | 0.435
1500 | 0.136
1700 | 0.0797
2000 | 0.0369
2500 | 0.0114
2750 | 0.00169
3000 | 0.00378
3500 | 0.00131
4000 | 0.000462

I still need to do the process for 2750 without the MET cut, but with Pythia and Delphes, to get the merged xsec.

All the BG xsecs are all right; I just need to put the 1.2 factor inside their k-factors.

23/09/2019

The cross sections for the processes with m_{d_{H}} of 155 and 156 GeV without the MET cut are 0.0422 pb and 0.0354 pb. For the m_{Z'} = 2750 GeV process it is 0.00643 pb.

04/10/2019

I'll stay at CERN for almost 3 months. While here, I am going to work with Maurizio Pierini on Deep Learning applied to High Energy Physics. I'm reading the Deep Learning book (which is available on the web), and it seems pretty good so far. I must learn what a Generative Adversarial Network (GAN) is, which is the last chapter of the book. Also, I am looking at some Machine Learning frameworks (mainly Keras with TensorFlow as backend, but I might change to Torch, since that is what Raghav (Maurizio's student, I guess) is using) and at how to work with the MNIST datasets (the handwritten digits and the fashion one). Everything is amazing, and I believe that this will be a major step for me.

While here, on Fridays I will work on the dark Higgs model. Today I built the brazilian plot graphs again (but now some less ugly ones), which are:

  • brazilian_mdh.png:
    brazilian_mdh.png

  • brazilian_mzp.png:
    brazilian_mzp.png

I was not inserting the right errors in the TGraphAsymmErrors, and that was a problem. But I've learned it now. I'll use TLimit with systematic errors in the BG now and see what happens. I am going to write a new macro for this, since the one that I was using is gone (something in my backup went wrong, I guess). The new macro is named tlimit_tutorial.C.

Thiago has made some tests about it, and he sent me a file with some results. The file is named ComparacaoTLimitCombine.xlsx, and it is inside my Delphes folder. I'll also save it in the "home" of my flash drive and on my external HD.

There are 24 images (12 for m_{d_{H}} and 12 for m_{Z'}) for the variation of the systematic uncertainty. I don't understand some of the results, but I must show them to everyone and ask for some explanation. They are in the folder tlimit_tutorial.

To get better results, I've simulated more mass points for m_{d_{H}}, and I'll do the same for m_{Z'} as well. For now, the simulated mass points and their respective cross sections are:

With MET cut:

m_{d_{H}} [GeV] | approx x-sec [pb]
142.5 | 0.01004
145.0 | 0.00878
147.5 | 0.00753
152.5 | 0.00506
157.5 | 0.00229

Without MET cut:

m_{d_{H}} [GeV] | approx x-sec [pb]
142.5 | 0.12
145.0 | 0.1037
147.5 | 0.0866
152.5 | 0.05615
157.5 | 0.02537

The files inside the Delphes folder are named mais_certo_300k_* just for simplicity. For m_{Z'} the xsecs are:

With MET cut:

m_{Z'} [GeV] | approx x-sec [pb]
2050 | 0.005976
2100 | 0.005443
2150 | 0.004983
2200 | 0.004664
2250 | 0.004201
2300 | 0.003878
2350 | 0.003529
2400 | 0.003191
2450 | 0.002887
2550 | 0.002404
2600 | 0.002255
2650 | 0.002037
2700 | 0.001857
2750 | 0.001679
2800 | 0.001537
2850 | 0.001406
2900 | 0.001278
2950 | 0.001150

Without MET cut:

m_{Z'} [GeV] | approx x-sec [pb]
2050 | 0.03253
2100 | 0.02939
2150 | 0.02551
2200 | 0.0232
2250 | 0.02019
2300 | 0.01808
2350 | 0.01579
2400 | 0.01433
2450 | 0.01295
2550 | 0.01025
2600 | 0.009138
2650 | 0.008178
2700 | 0.007278
2750 | 0.006527
2800 | 0.005934
2850 | 0.005372
2900 | 0.004714
2950 | 0.004222

19/02/2020

While at CERN I got some nice results using a ConVAE to generate jets. There is a lot of work ahead (learning what the energy flow polynomials are and how to use them), but I think we are on the right track.

Concerning the dark Higgs model, a lot of progress was made. First of all, the brazilian plots of m_{d_{H}} and m_{Z'} are very nice, and I learned how to use TLimit with systematic uncertainties in the BG (useful in the future, or to get more familiar with other methods). Thiago gave me access to headtop, and this is helping a lot with the simulation of new data. With it I was able to reproduce (badly) the left image of figure 5 from the "Hunting the dark Higgs" (arXiv:1701.08780) paper.

In headtop, the files steering* and param_card_* inside the MG folder are the mass points being used for m_{d_{H}} = 70 GeV. I am doing m_{\chi} = 50 -> 800 GeV (in steps of 50) and m_{Z'} = 250 -> 4000 GeV (in steps of 250), plus 120, 3200, 3400 and 3600 (more resolution is needed). The exclusion region applying only the MET > 500.0 GeV cut for L = 150 fb^-1 is almost identical to the paper one for L = 40 fb^-1. With m_{J} cuts, the exclusion region is highly improved, but I don't know if this cut could be used in a real analysis. Also, inside each of the mass point folders, called teste_MET400_*, inside the Events folder where all the root files are, the merged_xsec.C macro will get the cross sections after merging, which are the right ones.

In headtop, inside the Delphes folder, the important macros are ultimate_* for each of the cuts, and every result is inside the folder testezao, separated by category. I am getting the exclusion region for L = 150 fb^-1 (run 2), L = 300 fb^-1 (end of run 3, predicted), L = 1000 fb^-1 (end of run 4, predicted) and L = 3000 fb^-1 (end of LHC, predicted).

On my computer there is also a folder named testezao, and the results are in the top folder. Three important macros inside the Delphes folder are plot_top_xsecs.C, plot_top_xsecs_lumi.C and only_plot_top_xsecs_lumi.C; they make these graphs.

I am finishing the BEPE report. I am waiting for feedback from Thiago about the latest version, and then I will need to ask Maurizio for a letter.

14/04/2020

I've been working on my master's dissertation. I am finishing Thiago's corrections. Everything is almost done; there are just a few things to work on. I need to generate the cross section graphs from the files xsecs_* inside the Delphes folder on my computer (they are also in my area in headtop, inside the folders named plot_xsecs). The macro to generate those graphs is plot_xsecs_mzp_mdh_mdm.C.

There are other macros to plot the exclusion regions inside the Delphes folder: plot_top_xsecs_lumi_interpolation.C, only_plot_top_xsecs_lumi.C and only_plot_top_xsecs_every_lumi.C. The files that I need are at (or are being created at) the testezao/top/ folder. In headtop, every ultimate_* macro inside the Delphes folder analyzes the events generated by MadGraph, which are named 400MET_*. There are events for fixed m_{d_{H}} with m_{Z'} and m_{\chi} varying, and for fixed m_{Z'} and m_{\chi} with m_{d_{H}} and g_{\chi} varying. The results are in agreement with the paper.

  • mzp_mchi_excl_comp.png:
    mzp_mchi_excl_comp.png

  • mdh_gchi_excl_comp.png:
    mdh_gchi_excl_comp.png

01/10/2020

I've been working with the FIMP model. Its simulation using MadGraph + Pythia + Delphes (I am going to call it just Delphes for simplicity) is not that good, since there are a lot of electrons/muons missing. Eduardo suggested that this is happening because of a generation cut (in MadGraph, Pythia or even Delphes), which I am going to investigate. I am also trying to simulate it using the FullSim of CMSSW, but that might be harder.

The Delphes-generated events are in the folder ~/MG5_aMC_v2_6_6/FIMP_leptons_varying_width. An important macro is in the Delphes folder under the name Delphes_FIMP_hist.C, and some results are in the folder Delphes_FIMP. 10000 events were generated for each decay width value with m_{F} = 130 GeV and m_{s0} = 125 GeV; from run_01 to run_10 the decay length (c\tau) is approximately 100 mm, 200 mm, ..., 1000 mm, and runs 11, 12, ..., 15 are 2 m, 4 m, 8 m, 16 m and 32 m respectively (a width-to-c\tau conversion sketch is after the list below). Run_16 is a prompt test (setting the decay width to 1). In MadGraph the syntax to simulate the events is p p > he~ HE~, and the F decay is being performed by Pythia through changes in the SLHA table. The tracks' transverse impact parameter seems to be what I expected; however, the number of reconstructed (generated) electrons + muons, which is around 1100 (7600), is much lower than the ~ 20000 expected from the F decays. I need to test a few things to find out where the problem is:

  • Change the s0, F mass difference to see if events that contain leptons with higher p_T are sufficient (this would point to a generator cut that might be caused by Madgraph, Pythia or Delphes)
  • Check the number of s0 generated particles (if this number is 20000, this would point only to Pythia or Delphes, I guess)
  • Check the reco and gen leptons p_T (this is not that useful since I would expect only leptons that already passed MadGraph lepton p_T restriction)
  • Check gen particle status (from Pythia log they must be +/- 23)
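
On the decay-length settings above: the width set in the card and the target c\tau are related by Gamma = hbar*c / c\tau. A tiny conversion helper (using hbar*c ≈ 1.973e-16 GeV m):

  #include <cstdio>

  double widthForCtau(double ctauMeters) {
    const double hbarc = 1.973269804e-16;  // GeV * m
    return hbarc / ctauMeters;             // width in GeV
  }

  int main() {
    std::printf("Gamma for ctau = 0.1 m: %e GeV\n", widthForCtau(0.1));
    return 0;
  }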

The number of generated s0 with status 23 is equal to the number of generated electrons + muons with status 23. The status of every s0 is equal to 23 (and there are status = 0 ones that I don't know the meaning of). MadGraph has a selection cut on the lepton p_T of 10 GeV. This shouldn't be a problem, since MadGraph would then generate events that have lepton p_T around this value. Delphes imposes an efficiency of 0 on reco leptons that have p_T < 10 GeV (a great issue given the mass difference that I set). Gen leptons have very low/high p_T, and reco ones have p_T > 10 GeV because of the cut I described before.

In run_17 I set m_F = 300 GeV and m_{s0} = 125 GeV with c\tau ~ 200 mm to see if the problem with the leptons would continue (I didn't change any generation restriction in MadGraph, Pythia or Delphes). The numbers of generated s0 and electrons + muons are equal, ~ 11000, but they are much lower than expected (~ 20000). However, the number of reconstructed electrons + muons is around 15000, which is a great improvement. This was indeed due to the Delphes restriction on the lepton p_T.

Run_18 is the same as the above, but with only 100 events. I am going to check the hepmc file to see if there is any valuable information about why some s0 and generated electrons/muons are missing. The hepmc file showed that even though every s0 or lepton that comes out of the F decay appears there as a stable particle (status = 1), not all of them have the hardest-subprocess outgoing status (23). I am going to talk to Thiago about this (mainly about whether I should fix it in the hepmc file (which would take a very long time) or how to get the right tracks).

05/10/2020

With respect to my HLT studies, Thiago asked me to apply the PFMET300 trigger to some NanoAOD file (I used a dark Higgs one) and to plot two histograms: one with the MET of every event and of every event that passed the trigger, and the ratio of both (passed/every). These plots, plus the same ones for the \phi of the MET (and their ratio), are in the images below.

  • met_hlt.png:
    met_hlt.png

  • phi_hlt.png:
    phi_hlt.png
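
The plots above were filled along these lines (a sketch; the NanoAOD branch names MET_pt and HLT_PFMET300 are my assumptions for the file in question):

  #include "TFile.h"
  #include "TTree.h"
  #include "TH1D.h"

  void trigger_eff_sketch(const char* fname) {
    TFile f(fname);
    TTree* t = (TTree*)f.Get("Events");
    float met; Bool_t pass;
    t->SetBranchAddress("MET_pt", &met);
    t->SetBranchAddress("HLT_PFMET300", &pass);
    TH1D hAll("hAll", "MET (all);MET [GeV];Events", 50, 0., 1000.);
    TH1D hPass("hPass", "MET (passed);MET [GeV];Events", 50, 0., 1000.);
    for (Long64_t i = 0; i < t->GetEntries(); ++i) {
      t->GetEntry(i);
      hAll.Fill(met);
      if (pass) hPass.Fill(met);  // events passing the trigger
    }
    TH1D* hRatio = (TH1D*)hPass.Clone("hRatio");
    hRatio->Divide(&hAll);  // passed/every
    // draw or write hAll, hPass and hRatio as needed
  }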

Today I did almost the same, but choosing a jet trigger (I am using the PFJet400 trigger on a dark Higgs NanoAOD file; the dark Higgs event jets are recoiling against a very high MET, so they are boosted and have a very high p_T), and I need to ask Thiago what he meant by matching offline and HLT jets (the information I have from the trigger is just whether each event passed the trigger or not; I don't have any information about the objects reconstructed during the HLT step).

  • jet_pt_hlt.png:
    jet_pt_hlt.png

  • jet_phi_hlt.png:
    jet_phi_hlt.png

  • jet_eta_hlt.png:
    jet_eta_hlt.png

07/10/2020

I am back to working with the FIMP model using Delphes. The histograms of the D_0 of the electron/muon tracks (Delphes tracks have a member function called PID, since they have access to the generated particle; this might be a matching between the generated particle and the track (I need to investigate it just to make it clearer)) follow what is expected. However, I used the files with 10000 generated events with the low F, s mass difference, which doesn't provide a lot of reconstructed electrons/muons. I am going to generate more of those events, but setting m_F = 300 GeV and m_{s0} = 125 GeV, and also changing the decay width. I am also going to set Delphes to reconstruct the F tracks, which is going to be useful for a few of the model's signatures.

In the folder FIMP_leptons_varying_width_bigmdiff_wFtrack, run_01 is for promptly decaying F, run_02 - run_11 are F with c\tau of 100 mm, 200 mm, ..., 1000 mm, and runs 12, 13, ..., 16 are 2 m, 4 m, 8 m, 16 m and 32 m respectively. However, I forgot to put the output HSCP tracks in the Delphes card, and all of them failed to give me a root file. I fixed it and I am trying to run again. It might not work though, since the input to the ParticlePropagator module is all the stable particles that Delphes has access to, and F is not a stable particle (as I remember from the tests that I made on 01/10/2020; I need to check it again). I built another Delphes module called ParticlePropagatorHSCP, whose input is all the particles that Delphes has access to (all gen particles), and I am just picking the HSCP tracks to feed into the other modules (tracking efficiency and momentum resolution). I also had to add it to the file ModulesLinkDef.h inside the modules folder. From run_17 to run_22 I have only failed tests (different configurations of the delphes_card.tcl and the modules). Run_23 was the one that worked, and I still need to test the properties of the F tracks.

I tested running the decay in MadGraph by changing the time_of_flight variable of the run_card.dat, and it actually worked. I don't need to put the F decay in the SLHA table for Pythia8, which is much easier. In the folder FIMP_test_runtofchange there are two runs: run_01, which has c\tau ~ 1000 mm, and run_02, which is a promptly decaying F; the difference in the D0 of the muons (I only set them to decay into muon + s0) between both files is very big (prompt has basically all muons with D0 ~ 0).

In the folder FIMP_leptons_varying_width_bigmdiff_wFtrack_corrected, run_01 is a promptly decaying F, run_02 - run_11 are F with c\tau of 100 mm, 200 mm, ..., 1000 mm, and runs 12, 13, ..., 16 are 2 m, 4 m, 8 m, 16 m, 32 m respectively, now with F being a track. I am going to use these files to test the tracks.

13/10/2020

I checked the status of the leptons and s0 after generating the decay of F in MadGraph (through p p > he~ HE~, (he~ > l- s0), (HE~ > l+ s0)), but I still have particles that only have status 1, and particles that go out with status 23 and are then shifted to a status 1 particle. However, I found something stranger than that. Even though a lepton and an s0 share the same F parent, their D0 distributions (I am taking this D0 as the generated-particle D0, using only the generated particles with PID = 255, 11 or 13 and status = 23 that I am sure come from the same F; this part I still need to double-check) are not the same. The s0 D0 distribution is not symmetric (the majority of the values fall on the D0 < 0 side), while the lepton D0 distribution is symmetric with respect to D0 = 0. I need to plot the quantities used to calculate D0 to see what is happening.

  • D0_particles_em_s0.png:
    D0_particles_em_s0.png

16/10/2020

Apparently Delphes was calculating the D0 of the s0 particles wrong. The left plot below shows the comparison of all the leptons and s0 with status 1 that are daughters of F particles (there are 20000 of each), but with D0 calculated as D0 = (x*py - y*px)/pt of the particles. Just to make sure this equation is the one Delphes uses, in the right plot I compared the distribution of the lepton D0 that comes from Delphes (lepton->D0) with the calculated one. They are exactly the same, with very few exceptions at very high D0 (I still need to make the same test with higher c\tau runs; this one is for c\tau ~ 100 mm). A sketch of the calculation follows the plots.

  • D0_particles_em_s0_Fparents_D0calc.png:
    D0_particles_em_s0_Fparents_D0calc.png

  • D0_particles_em_s0_Fparents_lepcomp.png:
    D0_particles_em_s0_Fparents_lepcomp.png
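
A minimal sketch of the cross-check (assuming the standard Delphes tree and GenParticle branch names; the file name is a placeholder, and the F-parentage requirement is omitted for brevity):

    import uproot

    # gen-level position and momentum of every particle in the Delphes tree
    tree = uproot.open("FIMP_delphes.root:Delphes")
    p = tree.arrays(["Particle.X", "Particle.Y", "Particle.Px",
                     "Particle.Py", "Particle.PT", "Particle.Status"])

    # keep only status-1 (stable) particles, as in the plots above
    stable = p["Particle.Status"] == 1
    x, y = p["Particle.X"][stable], p["Particle.Y"][stable]
    px, py = p["Particle.Px"][stable], p["Particle.Py"][stable]
    pt = p["Particle.PT"][stable]

    # transverse impact parameter, using the same expression quoted above
    d0 = (x * py - y * px) / pt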

Github workflow for making PR in CMSSW

Update the master of the private repository

Use the command scram build code-format to check the code (otherwise the cms robot will be mad)

From the command line, create a new branch

  • git checkout -b borzari-change

Add to the commit the files that were changed (repeat the command below with all the files)

  • git add name_of_file

Commit the changes

  • git commit -m "Few words description of the commit"

Push the changes to the private repository

  • git push my-cmssw borzari-change

Make a pull request

  • In the private repository there is now the new branch with the modifications. Open a pull request from that branch to the master branch of the cmssw repository. The message in the pull request should contain a more detailed description of the modifications, what changes are expected, and what validation was performed on the modified code.

Follow the updates of the PR in the main cmssw repository.

MadGraph5 simulation

Using MG5 interface

Using MG5_aMC_v3_1_1 as reference, but it should work with other versions (with possible changes in the Python version used to compile)

A general (and incredibly useful) comment before starting:

  • The commands help ... and display ... can be very helpful. The ... in help refers to other standard MG5 commands (such as display, generate, launch, etc.), and in display it can be things like diagrams, modellist, etc. The available options can be listed by pressing the Tab key twice.

SM example and introduction (LO case)

Access the framework with ./bin/mg5_aMC. The default initialization properties are shown first (whether it is up to date, which model is under consideration, which multiparticles are already defined, etc.). There, when using the syntax generate p p > ... it is already possible to perform the calculation of several SM processes. Using as an example the inclusive production of a muon pair (in a pp collider), the correct syntax is generate p p > mu+ mu-. An interesting feature at this point is to look at the Feynman diagrams for the process, available with the command display diagrams.

To actually perform the calculations, use the launch command. It will create a folder named PROC_%name_of_model%_%number%, where %number% starts at 0 and counts how many times a process in the model %name_of_model% was simulated without giving a specific folder name to be created. In this example (if it is the first time running something for the SM) it will be PROC_sm_0.

MG5 will then ask whether any other program should run, through some switches that can be left OFF or turned on: programs for shower/hadronization (such as Pythia8); detector simulation (such as Delphes); event analysis (never used this); particle decays (such as MadSpin; many particle decays can be implemented in the generate p p > ... command itself, but MadSpin is very useful in some cases that exclude this possibility); and event reweighting (never used this either). All these programs need to be installed inside MG5 using the command install ... (pressing the Tab key twice shows the available software; help install also shows some useful information). There is a 60 s timer that can be stopped by pressing the Tab key. In this example just press Enter to not use anything else.

Tip: use install pythia8 --pythia8_tarball=/PATH/TO/SOME/PYTHIA/TARBALL to use a given version of Pythia 8 you have already downloaded, instead of letting it download whatever version from the Pythia website.

After choosing the auxiliary programs, MG5 will give a list of the cards to be used and ask if any editing is needed. The standard (minimum) cards that MG5 needs to perform the calculations are the param_card, which contains all the useful information about the parameters of the model under consideration (essentially particle masses, decay widths, and couplings), and the run_card, which contains information about the simulation parameters (number of events, properties of the interacting initial particles, generation cuts to be set, etc.). More cards, such as the Pythia or Delphes parameters, are added if needed by the chosen auxiliary programs. In this example only the param_card and the run_card are available, and they can be changed through switches that will open the vi editor, or by giving the path to some other card (more on this later). Just press Enter to not edit anything.
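
Putting the steps above together, the whole LO session for this example is just (commands typed at the MG5 prompt; press Enter at the switches prompt and again at the card-editing prompt):

    ./bin/mg5_aMC
    generate p p > mu+ mu-
    display diagrams
    launch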

After that MG5 will perform the calculations. In our case of inclusive muon pair production, the estimated cross section is 839.1 +- 2.589 pb. Notice that it only considered the photon and the Z boson as intermediate bosons (the Higgs is not considered, presumably because the muon is massless in the default SM model, so the Higgs diagrams vanish). To change that, using the example where only the photon should be considered, one could use one of the following commands:

  • generate p p > mu+ mu- / z (exclude every diagram that has a Z boson appearing)
  • generate p p > mu+ mu- $ z (exclude only the diagrams with a Z boson in the s-channel)
  • generate p p > a > mu+ mu- (only generate the process where the photon is in the s-channel)
  • generate p p > a, a > mu+ mu- (would generate any other channels, if there were any, where the produced photon will still decay into two muons)

The last syntax above is very useful for imposing the decay of particles. A decay chain can also be simulated with the following (using a t tbar example):

  • generate p p > t t~, (t > W+ b, W+ > j j), (t~ > W- b~, W- > l- vl~)

Another useful command is add process p p > .... It makes it possible to simulate several distinct processes at once, such as generate p p > t t~ %press Enter%; add process p p > t t~ j, in which the process with an extra hard jet is added. Notice that in this case (adding this to this summary since this is one of the most useful applications of add process) some jet matching/merging algorithm should be used. MG5 has the relevant properties for the algorithm in the run_card, but it should always be used in association with Pythia8, which will effectively apply the algorithm and provide the events with weights that separate the "real" ones from the double-counted ones. More useful information in https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/IntroMatching, https://cp3.irmp.ucl.ac.be/projects/madgraph/wiki/Matching and https://pythia.org/latest-manual/JetMatching.html (so far I've only used the MLM scheme). A minimal command sequence is sketched below.
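
As an illustration, a minimal matched run could be set up as below (a sketch; the output folder name is made up, and ickkw = 1 with some xqcut are the run_card settings that activate MLM matching on the MG5 side, while Pythia8 performs the actual merging):

    generate p p > t t~
    add process p p > t t~ j
    output example_ttbar_matched
    launch
    shower=Pythia8
    # then, in the run_card: set ickkw = 1 and a sensible xqcut (e.g. 30.0)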

BSM model (AMSB as example; need param_cards from https://github.com/borzari/DisappTrks/tree/master/SignalMC/data/SLHA_withDecay; LO only)

Start by downloading any of the cards in the link above into the MG5 folder in .dat format (I am using the 100 cm, 700 GeV AMSB_chargino one: AMSB_chargino_700GeV_100cm_Isajet780.dat). Execute the MG5 framework the same way as in the SM case. The different step now is that we need to import the relevant model for the calculations: import model MSSM_UFO. A new definition of the multiparticle all is created/printed, with all the particle content of the imported model. One of our use cases is the inclusive production of a pair of charginos, represented by x1+ and x1-, or, in the generate syntax, generate p p > x1+ x1-. Also take a moment to look at the diagrams of this process. Let us also add the process with an extra jet, that is, add process p p > x1+ x1- j, and learn more about the jet merging scheme.

To create a dedicated folder to store all the information of this process, use the command output %name_of_folder%, such as output example_AMSB_chargino_700GeV_100cm. Then, type launch. Now, for the purpose of jet matching, Pythia8 is needed as the hadronizer, since it is at this step that the merging algorithm is executed. Notice that the MG5 LHE file, which is its standard output, stores no relevant information concerning the event merging, since it is created before the algorithm is applied.

After that, it is possible to see (in this version at least; older versions always had a lot of distinct options in the run_card) that a lot of new parameters are available in the run_card, related to the jet matching procedure. There are few standard choices for those parameters, available in the links above, so just leave them the way they are or check some other reference. Notice also that the param_card now has a lot of new parameters, due to using another model. We also have a third card with relevant information for the hadronization with Pythia8. In this step we can supply the new param_card by providing its path to MG5 as /PATH_TO_FILE/AMSB_chargino_700GeV_100cm_Isajet780.dat. If you look into the param_card now, it will be exactly what was downloaded before.

One problem that arises is that some information is missing in this param_card. To solve the issue, the masses of some SM particles need to be added to the mass block in the following form (the leading numbers are the PDG codes):

    1   5.040000e-03  # md
    2   2.550000e-03  # mu
    3   1.010000e-01  # ms
    4   1.270000e+00  # mc
    5   4.200000e+00  # mb
    6   1.730700e+02  # mt
    11  5.110000e-04  # me
    13  1.056600e-01  # mmu
    15  1.777000e+00  # mta
    23  9.117000e+01  # mz

Another message about missing parameters (particle decays, this time) will appear, but MG5 fills in the blanks and can calculate the process. In this case, the calculated cross section 0.006318 +- 1.381e-05 pb is very different from the cross section after the merging procedure, 0.0033247 +/- 5.7e-06 pb (usually the one from the smallest merging scale is used). This happens because of the events vetoed as double-counted when accounting for both events with one hard jet and events with ISR.

Summary

To summarize everything, this is the step-by-step to perform most of LO calculations in MG5:

  • Access the program: ./bin/mg5_aMC
  • Import your model: import model ... (if SM, this is not needed)
  • Generate process: generate p p > ... (pp for hadron colliders; ee, p pbar, e gamma and p gamma are also supported -> all of these need to be set in the run card)
  • Add processes: add process p p > ... (if needed)
  • Create folder: output %name_of_folder%
  • Launch process: launch
  • Turn on or off auxiliary programs depending on need
  • Check/modify the relevant cards
  • Wait until the calculation is performed
  • In an event matching case, a hadronizer (mainly Pythia8) is needed, and the correct cross section is the matched one

More useful information can be found in these tutorials https://www.niu.edu/spmartin/madgraph/madtutor.html, https://www.niu.edu/spmartin/madgraph/madsyntax.html and https://www.niu.edu/spmartin/madgraph/madhiggs.html, in the MadGraph websites https://launchpad.net/mg5amcnlo and http://madgraph.phys.ucl.ac.be/, and in its paper https://arxiv.org/abs/1405.0301.

Using external cards for MG5

If one needs to run several calculations in MG5 with very few modifications to the cards required by the process, it is possible to pass the relevant cards to MG5 without the need to access the program, generate the process, output the folder, etc. The attached cards (AMSB_700GeV_x1x1_x1x0_jet_proc_card.dat, AMSB_700GeV_x1x1_x1x0_jet_param_card.dat and AMSB_run_card.dat) show examples of how to execute the calculations for the AMSB model. The process card has all the information on which model will be used, which process will be generated, whether auxiliary programs will be used, and the paths to all the necessary cards. Notice that there is an extra line before the definition of the processes here, define x1 = x1+ x1-, which is the way to define multiparticles in MG5. Every time x1 appears in a process, MG5 will account for both x1+ and x1-. A sketch of such a process card is given below.
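
A sketch of what such a process card can contain (the output folder name and card paths are placeholders; the attached cards are the working example):

    import model MSSM_UFO
    define x1 = x1+ x1-
    generate p p > x1 x1
    add process p p > x1 x1 j
    output AMSB_chargino_700GeV_100cm_example
    launch
    shower=Pythia8
    /PATH_TO_FILE/AMSB_700GeV_x1x1_x1x0_jet_param_card.dat
    /PATH_TO_FILE/AMSB_run_card.dat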

As usual, the param_card and the run_card contain all the other relevant information for the simulation. To execute the calculation, the following command is used from your MG5 folder: ./bin/mg5_aMC %name_of_process_card%, or, in this example, ./bin/mg5_aMC AMSB_700GeV_x1x1_x1x0_jet_proc_card.dat. The resulting merged cross section should be something like 0.010694 +/- 1.8e-05 pb.

CMSSW simulation (in the DisTrk analysis context)

Tested in cms04 and lxplus

The first step is to setup the correct CMSSW environment. For that, the following commands are needed in the given order:

  • export VO_CMS_SW_DIR=/cvmfs/cms.cern.ch (can be added to .bashrc)
  • source $VO_CMS_SW_DIR/cmsset_default.sh (can be added to .bashrc)
  • export SCRAM_ARCH=slc7_amd64_gcc630 (for 2017 data); export SCRAM_ARCH=slc7_amd64_gcc820 (for 2018 and Run3 data) (can be added to bash, but it depends on release being used)
  • cmsrel CMSSW_9_3_13 (for 2017 data); cmsrel CMSSW_10_2_0 (for 2018 data); cmsrel CMSSW_11_0_2 (for Run3 data)
  • cd CMSSW_VERSION_OF_CHOICE/src
  • cmsenv
  • git cms-init
  • git cms-addpkg SimG4Core/CustomPhysics
  • git clone https://github.com/OSU-CMS/DisappTrks.git
  • cd DisappTrks/SignalMC
  • scram b -j 8 (some WARNINGS will appear, but nothing to worry about)
  • cd test

Step 1 (GEN-SIM or GEN only)

Using Pythia8 as generator and hadronizer

In the test folder there are some scripts that create the configuration files for the GEN-SIM step (called step1 in the analysis framework), named createStep1s_2017.py, createStep1s_2018.py, createStep1s_Run3.py and createStep1s_Run3Trigger_cfg.py. Each one has its own requirement regarding the CMSSW version in use. Let's start with the 2017 one (CMSSW_9_3_13). To create the "hadronizers", execute the script with python createStep1s_2017.py. If nothing is changed in the script, several particle files (which contain the necessary information about the "new" particles in the model for the correct simulation of the LLPs) are created in ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/data/geant4 (for the wino-like case) and ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/data/geant4_higgsino (for the higgsino-like case), one for each chargino mass/lifetime point. Several configuration files are also created in ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/python/pythia8; those will be used for the generation of the events.

To create the configuration files that will be executed with cmsRun, go back to the SignalMC folder, scram b -j 8 again, go back to the test folder, and run the command python createStep1s_2017.py 2. Now all the created files are inside ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/test/step1/pythia8. To generate the events, run cmsRun name_of_file_cfg.py and wait until completion. Notice, however, that the default number of events in createStep1s_2017.py is 10. If needed, this number can be changed in lines 153 or 170 of the script, for the wino- and higgsino-like cases respectively, or directly in each configuration file through process.maxEvents, as sketched below.
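
For example, directly inside a generated configuration file (the value 1000 is just illustrative):

    process.maxEvents = cms.untracked.PSet(
        input = cms.untracked.int32(1000)
    )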

To create GEN-only configuration files, copy the cmsDriver.py command line options (at the very beginning of the file) that were used for a given GEN-SIM configuration file. Using as example the file AMSB_chargino_M700GeV_CTau100cm_step1.py, which has the following cmsDriver.py options: DisappTrks/SignalMC/python/pythia8/AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py --fileout file:AMSB_chargino700GeV_ctau100cm_step1.root --mc --eventcontent RAWSIM --customise Configuration/DataProcessing/Utils.addMonitoring,SimG4Core/CustomPhysics/Exotica_HSCP_SIM_cfi,SimG4Core/Application/customiseSequentialSim.customiseSequentialSim --datatier GEN-SIM --conditions 93X_mc2017_realistic_v3 --beamspot Realistic25ns13TeVEarly2017Collision --step GEN,SIM --geometry DB:Extended --era Run2_2017 --python_filename step1/pythia8/AMSB_chargino_M700GeV_CTau100cm_step1.py --no_exec -n 10, copy these options and, from the test folder, type cmsDriver.py %paste_options_here% but don't hit Enter yet. Four modifications are needed: first, change the --eventcontent option to AODSIM; second, remove the customizations related to SimG4Core; third, remove the SIM step from --step; and fourth, add the option --nThreads 8 (the SimG4Core customizations don't allow multithreading). The final options should look like DisappTrks/SignalMC/python/pythia8/AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py --fileout file:AMSB_chargino_M700GeV_ctau100cm_step1.root --mc --eventcontent AODSIM --customise Configuration/DataProcessing/Utils.addMonitoring --datatier GEN-SIM --conditions 93X_mc2017_realistic_v3 --beamspot Realistic25ns13TeVEarly2017Collision --step GEN --nThreads 8 --geometry DB:Extended --era Run2_2017 --python_filename step1/pythia8/AMSB_chargino_M700GeV_ctau100cm_step1.py --no_exec -n 50000 (notice the higher number of events to generate, since GEN only is much faster).

Using MG5 as generator and Pythia8 as hadronizer

So far, several modifications have to be made to the already existing Pythia8 files for generation with MadGraph. I intend to change that in the near future by using gridpacks and creating scripts that write all the necessary parameter files.

The first step of the generation, since in this case the events are based on an LHE file, is to generate some events using MG5 in the way described in the section Using external cards for MG5, and to move the LHE file to the relevant area in cms04 or lxplus, preferably inside the folder ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/test/step1/pythia8/. The param_card used for the LHE file generation is also necessary, since a new "hadronizer" file will be needed and it uses this information. To create the new input configuration file, make a copy of the Pythia8 file DisappTrks/SignalMC/python/pythia8/AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py and give it a suggestive name, like MG5_AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py. Three modifications are needed: first, substitute the SLHA table already there with the content of the param_card of the events generated in MG5; second, change the module that executes Pythia8 to "Pythia8HadronizerFilter"; and third, add the following lines to the processParameters block:

    'JetMatching:setMad = off',
    'JetMatching:scheme = 1',
    'JetMatching:merge = on',
    'JetMatching:jetAlgorithm = 2',
    'JetMatching:etaJetMax = 5.',
    'JetMatching:coneRadius = 1.',
    'JetMatching:slowJetPower = 1',
    'JetMatching:qCut = 30.', # this is the actual merging scale
    'JetMatching:nQmatch = 5', # 4 corresponds to 4-flavour scheme (no matching of b-quarks), 5 for 5-flavour scheme
    'JetMatching:nJetMax = 1', # number of partons in born matrix element for highest multiplicity
    'JetMatching:doShowerKt = off', # off for MLM matching, turn on for shower-kT matching

Some important notes on the lines above: they ensure that the only events that "survive" the generation step are the ones that are not double-counted by the jet matching procedure performed by Pythia8; the values of those parameters are under review and might change in the future to guarantee the correct simulation of the events.

The last step is then to create the configuration files to execute with cmsRun. Using as example the command line options from the Pythia8 files, one can execute the following commands from the test folder (notice the addition of the LHE files as input files) to create GEN-SIM and GEN only configuration files, respectively:

  • cmsDriver.py DisappTrks/SignalMC/python/pythia8/MG5_AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py --filein file:NAME_OF_LHE_FILE.lhe --fileout file:AMSB_chargino700GeV_ctau100cm_step1.root --mc --eventcontent RAWSIM --customise Configuration/DataProcessing/Utils.addMonitoring,SimG4Core/CustomPhysics/Exotica_HSCP_SIM_cfi,SimG4Core/Application/customiseSequentialSim.customiseSequentialSim --datatier GEN-SIM --conditions 93X_mc2017_realistic_v3 --beamspot Realistic25ns13TeVEarly2017Collision --step GEN,SIM --geometry DB:Extended --era Run2_2017 --python_filename step1/pythia8/AMSB_chargino_M700GeV_CTau100cm_step1.py --no_exec -n 1000

  • cmsDriver.py DisappTrks/SignalMC/python/pythia8/MG5_AMSB_chargino_M700GeV_CTau100cm_TuneCP5_13TeV_pythia8_cff.py --filein file:NAME_OF_LHE_FILE.lhe --fileout file:AMSB_chargino_M700GeV_ctau100cm_step1.root --mc --eventcontent AODSIM --customise Configuration/DataProcessing/Utils.addMonitoring --datatier GEN-SIM --conditions 93X_mc2017_realistic_v3 --beamspot Realistic25ns13TeVEarly2017Collision --step GEN --nThreads 8 --geometry DB:Extended --era Run2_2017 --python_filename step1/pythia8/AMSB_chargino_M700GeV_ctau100cm_step1.py --no_exec -n 50000

Step 2 (DIGI; works for both Pythia8 and MG5)

For 2017 only, CMSSW release 9_4_12 is needed. Inside the folder ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/test/step2/ there are examples of configuration files for all the distinct datasets. The example command line options for cmsDriver.py (to be executed in the test folder; note that this particular set uses the Run3 era and conditions) are step1 --filein file:AMSB_chargino700GeV_ctau100cm_step1.root --fileout file:AMSB_chargino700GeV_ctau100cm_step2.root --pileup_input dbs:/Neutrino_E-10_gun/RunIISummer17PrePremix-PURun3Winter20_110X_mcRun3_2021_realistic_v6-v2/PREMIX --mc --eventcontent PREMIXRAW --datatier GEN-SIM-RAW --conditions 110X_mcRun3_2021_realistic_v6 --step DIGI,DATAMIX,L1,DIGI2RAW,HLT:GRun --procModifiers premix_stage2 --nThreads 8 --geometry DB:Extended --datamix PreMix --era Run3 --python_filename step2/AMSB_chargino_step2_Run3_cfg.py --no_exec --customise Configuration/DataProcessing/Utils.addMonitoring -n 10. The configuration file will be created inside step2 and can be executed with cmsRun name_of_file_cfg.py.

Notice, however, that these options include PU simulation in the events, which is not possible on cms04. To remove the PU addition, completely remove the options --pileup_input, --procModifiers, and --datamix, remove the DATAMIX step from --step, and change the --eventcontent to RAWSIM. The number of events -n can be set to -1 to process every event from the input file.

Step 3 (RECO; works for both Pythia8 and MG5)

In step 3 we have the same recipe as in step 2. Access the folder ~/CMSSW_9_3_13/src/DisappTrks/SignalMC/test/step3/ and use the cmsDriver.py command line options from the relevant file (to be executed in the test folder). The 2017 example is step2 --filein file:AMSB_chargino700GeV_ctau100cm_step2.root --fileout file:AMSB_chargino700GeV_ctau100cm_step3.root --mc --eventcontent AODSIM --runUnscheduled --datatier AODSIM --conditions 94X_mc2017_realistic_v11 --step RAW2DIGI,RECO,RECOSIM,EI --nThreads 4 --era Run2_2017 --python_filename step3/AMSB_chargino_step3_2017_cfg.py --no_exec --customise Configuration/DataProcessing/Utils.addMonitoring -n 10. The number of events -n can be set to -1 to process every event from the input file. The configuration file will be created inside step3 and can be executed with cmsRun name_of_file_cfg.py.

Generation of CutFlows in OSUT3 framework for Run2 data

Firstly, to set up the environment for the following steps, go to a folder inside DisappTrks (StandardAnalysis will be used as an example) and use the command makeCondorLinks.sh to create the symbolic links to the condor folders.

localConfig

The first step is to execute the file DisappTrks/StandardAnalysis/test/localConfig_2022.py from within DisappTrks/StandardAnalysis/test using the command osusub.py -l localConfig_2022.py -t OSUT3Ntuple -w CONDOR_DIR -R "Memory > 1024", where CONDOR_DIR is the target directory in /data/users/borzari and -t defines the type of file to be used as input (in this example an OSUT3Ntuple). The file localConfig_2022.py imports DisappTrks/StandardAnalysis/python/localConfig.py and should use some info from config_2022_cfg.py, but that is overwritten by DisappTrks/StandardAnalysis/python/config_cfg.py (not 100% sure about this, but will test). localConfig.py imports dataset samples from DisappTrks/StandardAnalysis/python/miniAOD_124X_Samples.py, which contains the dataset names for different samples, or at least should, since it doesn't seem to do anything. localConfig.py also defines some lists of dataset names, named datasetsBkgd, datasetsBkgdForMET, datasetsSig, datasetsSigHiggsino, etc., and defines sample cross sections; I don't know yet where either of these is used. A last resource used in localConfig.py is the possibility to "customize" the input datasets, e.g. with addLifetimeReweighting, defined in DisappTrks/StandardAnalysis/python/utilities.py. The important dataset is defined inside localConfig.py. What is in DisappTrks/StandardAnalysis/test/localConfig_2022.py probably does nothing; I need to check whether some info is passed to OSUT3Analysis/DBTools/scripts/osusub.py. And localConfig.py does indeed use the dictionary defined inside miniAOD_124X_Samples.py to get the datasets.

DisappTrks/StandardAnalysis/test/config_2022_cfg.py adds some customization to the CMSSW process, such as applyPUReweighting, applyISRReweighting, applyTriggerReweighting, applyMissingHitsCorrections, runMETFilters and runEcalBadCalibFilters, where the customize function is defined in DisappTrks/StandardAnalysis/python/customize.py. It essentially uses the runPeriod (2022 in this example) and booleans with the same names as above to configure the modules that perform the customization. It also imports DisappTrks/StandardAnalysis/python/config_cfg.py, which adds the channels where the selections are applied (maybe this is where the standard analysis is applied; need to understand it better). config_cfg.py imports DisappTrks/StandardAnalysis/python/protoConfig_cfg.py, which is the actual CMSSW config file to be "executed" with cmsRun (osusub.py should do it automatically).

Actually, using DisappTrks/StandardAnalysis/test/localConfig_2022.py with the command osusub.py -l localConfig_2022.py -t UserDir -w CONDOR_DIR --inputDirectory=INPUT_DIR seems to be the correct approach. Also, inside DisappTrks/StandardAnalysis/python/protoConfig_cfg.py there was an error with the MC GlobalTag defined in line 22: it was set to fixme, but it should be changed to auto:phase1_2021_realistic. Also, the channel in DisappTrks/StandardAnalysis/python/config_cfg.py was set to the one in line 55, which does the disTrkSelection.

Minimal changes to make localConfig "work" with MiniAOD

  • Change DisappTrks/StandardAnalysis/python/config_cfg.py to use disTrk selection
  • Add signal dataset name in DisappTrks/StandardAnalysis/python/localConfig.py, remove unnecessary loops in lines ~200 and comment addLifetimeReweighting (datasetsSig)
  • Add the sample name with a placeholder (since -t UserDir will be used, this works) in DisappTrks/StandardAnalysis/python/miniAOD_124X_Samples.py
  • Change 12_4_X mc global tag to the relevant one for a given 12_4_X release in DisappTrks/StandardAnalysis/python/protoConfig_cfg.py
  • Change datasets in DisappTrks/StandardAnalysis/test/localConfig_2022.py
  • Inside the file OSUT3Analysis/Configuration/python/configurationOptions.py the same keys to the dataset should be added. The information is: key, number of jobs, number of events, type (BG, sig, MC, etc.), colors (for plots), style (of line, probably) and cross section (in fb, probably)
  • In file OSUT3Analysis/Configuration/python/CollectionProducer_cff.py around line 300, uncomment candidateTracks for the track producer to use it

Cuts

triggers

    cutMetFilters = cms.PSet( inputCollection = cms.vstring("mets"), cutString = cms.string("badPFMuonFilter && badChargedCandidateFilter && passecalBadCalibFilterUpdate"), numberRequired = cms.string(">= 1"), )

    cutGoodPV = cms.PSet ( inputCollection = cms.vstring("primaryvertexs"), cutString = cms.string("isValid > 0 && ndof >= 4 && fabs (z) < 24.0 && sqrt (x * x + y * y) < 2.0"), numberRequired = cms.string(">= 1"), alias = cms.string(">= 1 good primary vertices"), )

    cutMet = cms.PSet( inputCollection = cms.vstring("mets"), cutString = cms.string("noMuPt > 120"), numberRequired = cms.string(">= 1"), )

    cutJetPt = cms.PSet( inputCollection = cms.vstring("jets"), cutString = cms.string("pt > 110"), numberRequired = cms.string(">= 1"), )

    cutJetEta = cms.PSet( inputCollection = cms.vstring("jets"), cutString = cms.string("fabs ( eta ) < 2.4"), numberRequired = cms.string(">= 1"), )

    cutJetTightLepVeto = cms.PSet( inputCollection = cms.vstring("jets"), cutString = cms.string(" (( neutralHadronEnergyFraction<0.90 && neutralEmEnergyFraction<0.90 && (chargedMultiplicity + neutralMultiplicity)>1 && muonEnergyFraction<0.8) && ((abs(eta)<=2.4 && chargedHadronEnergyFraction>0 && chargedMultiplicity>0 && chargedEmEnergyFraction<0.90) || abs(eta)>2.4) && abs(eta)<=3.0) || (neutralEmEnergyFraction<0.90 && neutralMultiplicity>10 && abs(eta)>3.0)"), numberRequired = cms.string(">= 1"), alias = cms.string(">= 1 jet passing TightLepVeto ID"), )

    cutDijetDeltaPhiMax = cms.PSet( inputCollection = cms.vstring("eventvariables"), cutString = cms.string("dijetMaxDeltaPhi < 2.5"), numberRequired = cms.string(">= 1"), alias = cms.string("veto pairs of jets with #Delta#phi > 2.5"), )

    cutLeadingJetMetPhi = cms.PSet( inputCollection = cms.vstring("eventvariables", "mets"), cutString = cms.string("fabs( dPhi (met.noMuPhi, eventvariable.phiJetLeading) ) > 0.5"), numberRequired = cms.string(">= 1"), alias = cms.string("#Delta#phi(E_{T}^{miss},jet) > 0.5"), )

    cutTrkEta = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("fabs ( eta ) < 2.1"), numberRequired = cms.string(">= 1"), )

    cutTrkPt55 = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("pt > 55"), numberRequired = cms.string(">= 1"), )

    # BARREL-ENDCAP GAP VETO
    cutTrkEcalGapVeto = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("fabs ( eta ) < 1.42 || fabs ( eta ) > 1.65"), numberRequired = cms.string(">= 1"), )

    # TRACK ETA: MUON INEFFICIENCY REGION 1
    cutTrkEtaMuonIneff1 = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("fabs ( eta ) < 0.15 || fabs ( eta ) > 0.35"), numberRequired = cms.string(">= 1"), )

    # TRACK ETA: MUON INEFFICIENCY REGION 2
    cutTrkEtaMuonIneff2 = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("fabs ( eta ) < 1.55 || fabs ( eta ) > 1.85"), numberRequired = cms.string(">= 1"), )

    cutTrkTOBCrack = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("!inTOBCrack"), numberRequired = cms.string(">= 1"), )

    cutTrkFiducialElectron = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("isFiducialElectronTrack"), numberRequired = cms.string(">= 1"), )

    cutTrkFiducialMuon = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("isFiducialMuonTrack"), numberRequired = cms.string(">= 1"), )

    cutTrkFiducialECAL = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("isFiducialECALTrack"), numberRequired = cms.string(">= 1"), )

    cutTrkNValidPixelHits = { x : cms.PSet ( inputCollection = cms.vstring("tracks"), cutString = cms.string("hitPattern_.numberOfValidPixelHits >= " + str(x)), numberRequired = cms.string(">= 1"), ) for x in range(1, 10) }
    cutTrkNValidPixelHitsSignal = cutTrkNValidPixelHits[4]

    cutTrkNValidHits = { x : cms.PSet ( inputCollection = cms.vstring("tracks"), cutString = cms.string("hitPattern_.numberOfValidHits >= " + str(x)), numberRequired = cms.string(">= 1"), ) for x in range(1, 10) }
    cutTrkNValidHitsSignal = cutTrkNValidHits[7]
    cutTrkNValidHitsVariations = {"NHits" + str(x) : cutTrkNValidHitsExclusive[x] for x in range(3, 7)}
    if os.environ["CMSSW_VERSION"].startswith("CMSSW_9_4_") or os.environ["CMSSW_VERSION"].startswith("CMSSW_10_2_") or os.environ["CMSSW_VERSION"].startswith("CMSSW_12_4_"):
        cutTrkNValidHitsSignal = cutTrkNValidHits[4]
        cutTrkNValidHitsVariations.update({"NHits7plus" : cutTrkNValidHits[7]})
        print("# Hits requirement: 4/4")
    else:
        print("# Hits requirement: 3/7")

    cutTrkNMissIn = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("missingInnerHits == 0"), numberRequired = cms.string(">= 1"), )

    cutTrkNMissMid = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("hitDrop_missingMiddleHits == 0"), numberRequired = cms.string(">= 1"), )

    cutTrkIso = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string(" ( trackIsoNoPUDRp3 / pt ) < 0.05"), numberRequired = cms.string(">= 1"), )

    cutTrkD0 = cms.PSet( inputCollection = cms.vstring("tracks", "eventvariables"), cutString = cms.string("fabs ( " + trackD0WRTPV + " ) < 0.02"), numberRequired = cms.string(">= 1"), alias = cms.string(">= 1 tracks with |d0| < 0.02"), )

    cutTrkDZ = cms.PSet( inputCollection = cms.vstring("tracks", "eventvariables"), cutString = cms.string("fabs ( " + trackDZWRTPV + " ) < 0.5"), numberRequired = cms.string(">= 1"), alias = cms.string(">= 1 tracks with |dz| < 0.5"), )

    cutTrkJetDeltaPhi = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("dRMinJet > 0.5"), numberRequired = cms.string(">= 1"), )

    cutTrkElecVeto = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("deltaRToClosestElectron > 0.15"), numberRequired = cms.string(">= 1"), )

    cutTrkMuonVeto = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("deltaRToClosestMuon > 0.15"), numberRequired = cms.string(">= 1"), )

    cutTrkTauHadVeto = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("deltaRToClosestTauHad > 0.15"), numberRequired = cms.string(">= 1"), )

    cutTrkEcalo = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("caloNewNoPUDRp5CentralCalo < 10"), numberRequired = cms.string(">= 1"), )

    cutTrkNMissOut = cms.PSet( inputCollection = cms.vstring("tracks"), cutString = cms.string("hitAndTOBDrop_bestTrackMissingOuterHits >= 3"), numberRequired = cms.string(">= 1"), )
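
For orientation, in the OSUT3 framework these PSets are assembled into a channel roughly as sketched below (a sketch only; the channel and trigger-list names are assumed, and the cut list is abbreviated):

    disTrkSelection = cms.PSet(
        name = cms.string("DisTrkSelection"),
        triggers = triggersMet,  # the analysis trigger list (assumed name)
        cuts = cms.VPSet(
            cutMetFilters,
            cutGoodPV,
            cutMet,
            cutJetPt,
            cutJetEta,
            cutJetTightLepVeto,
            # ... followed by the track-level cuts defined above
        )
    )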

Generation of CutFlows in OSUT3 framework for Run3 BG

There are a few steps to do it. For each dataset there should be a request to copy the AOD files from tape to the SPRACE T2; then comes the creation of what we are now calling extended MINIAOD, which is the MINIAOD dataset with the extra calorimeter rechits for the events that pass the skim cuts; and then the BasicSelection is run over those files.

Replicating files from tape

To replicate (copy) the files from tape to T2_BR_SPRACE, Rucio is used. The twiki on how to use Rucio from the command line, which is easier, is https://twiki.cern.ch/twiki/bin/view/CMSPublic/RucioUserDocsRules . To set up Rucio on lxplus just make a script with the following lines:

    source $CMS_PATH/rucio/setup-py3.sh
    export RUCIO_ACCOUNT=YOUR_LXPLUS_USERNAME

The most useful commands are:

  • rucio list-rules --account $RUCIO_ACCOUNT
  • rucio add-rule --ask-approval cms:/DATASET/EXAMPLE/AODSIM 1 T2_BR_SPRACE --lifetime 2592000
  • rucio delete-rule [RULE_HASH]

Skimming and saving MINIAOD collection

The skim cuts are essentially the analysis trigger selection plus events with tracks with pT > 25 GeV and |eta| < 2.1. After the event selection, the PAT step, which creates the MINIAOD collections, is executed again, and both the MINIAOD collections and the calorimeter rechit collections are saved in the same dataset. The config that does all of this is DisappTrks/Skims/test/test_EXODisappTrk_MC2022_PAT.py, and the crab config file to submit the jobs is DisappTrks/Skims/test/test_crab.py.

Running the StandardAnalysis

This step is very similar to the rest. Firstly, the key for the dataset has to be created. For that, DisappTrks/StandardAnalysis/python/miniAOD_124X_Samples.py should be changed to include the dataset key/location in dataset_names_bkgd; DisappTrks/StandardAnalysis/test/localConfig_2022.py should be changed to have the correct key in the datasetsBG list; and OSUT3Analysis/Configuration/python/configurationOptions.py should include the keys and the 7 pieces of information mentioned in the section *Minimal changes to make localConfig "work" with MiniAOD* above. After all those changes, DisappTrks/StandardAnalysis/python/config_cfg.py must be changed to run the BasicSelection, and everything is run with (using WW_PostEE as an example):

/home/borzari/scratch0/CMSSW_12_4_11_patch3/src/OSUT3Analysis/DBTools/scripts/osusub.py -l localConfig_2022.py -t Dataset -w BGMC/Run3/2022/Cutflows_WW_postEE -R request_cpus=2 -A -P

Invalid files on the store

Some datasets produced with CRAB had their status changed to INVALID. To correct the issue, follow https://cms-talk.web.cern.ch/t/dbs-phys03-files-wrongly-invalidated/31152/2

Useful Condor commands

  • condor_q --allusers: list everyone's jobs
  • condor_q -l JOB.ID: list detailed information for the given JOB.ID; a useful field is RemoteHost
  • condor_q --run --nobatch: display the machine and run time of each running job
  • condor_tail -f JOB.ID: display on the screen the last lines of stdout; will keep displaying until interrupted by Ctrl+C
  • condor_tail -stderr -no-stdout -f JOB.ID: display on the screen the last lines of condor.err; will keep displaying until interrupted by Ctrl+C
  • condor_hold JOB.ID: set job to HOLD status; if used only with JOB, all JOB.IDs will be held
  • condor_release JOB.ID: release held jobs
  • condor_rm JOB.ID: kill JOB.ID and remove it from submission queue
  • condor_submit condor.sub: submit condor.sub

More info in https://twiki.cern.ch/twiki/bin/view/CENF/NeutrinoClusterCondorDoc

Useful commands for files at SPRACE T2

  • gfal-ls -l root://osg-se.sprace.org.br:1094//store/user/borzari : list all files in /store/user/borzari
  • gfal-rm root://osg-se.sprace.org.br:1094//store/user/borzari/file.001 : deletes file.001. It can also be used with option -r for recursive deletion

Useful CRAB links

How to calculate luminosity

Follow the guidelines in https://twiki.cern.ch/twiki/bin/viewauth/CMS/BrilcalcQuickStart#Getting_started_without_requirin

How to calculate luminosity from crab tasks

  • Get JSON files running crab report -d path_to_dir
  • Calculate luminosity with brilcalc lumi --normtag /cvmfs/cms-bril.cern.ch/cms-lumi-pog/Normtags/normtag_BRIL.json -u /fb -i [your json]
    • Notice that the normtag can be different, depending on the case; for Run 3 the usual choice is the BRIL normtag

Comments

-- trtomei - 2021-11-24

Topic attachments

  • ALL_backgrounds_mJ-1.png (67.7 K, 2019-04-08)
  • AMSB_700GeV_x1x1_x1x0_jet_param_card.dat (7.4 K, 2021-11-24)
  • AMSB_700GeV_x1x1_x1x0_jet_proc_card.dat (0.3 K, 2021-11-24)
  • AMSB_run_card.dat (9.5 K, 2021-11-24)
  • BG_certo.png (34.0 K, 2019-07-02)
  • bg.png (53.3 K, 2019-05-08)
  • bg_certo_comp.png (48.7 K, 2019-07-03)
  • brazilian_mdh.png (56.4 K, 2019-10-04)
  • brazilian_mzp.png (61.8 K, 2019-10-04)
  • cls_mdh.png (25.4 K, 2019-08-29)
  • cls_mdh_com_intermediarios.png (13.4 K, 2019-09-12)
  • cls_mzp.png (24.8 K, 2019-08-29)
  • cls_mzp_com_intermediarios.png (8.2 K, 2019-09-12)
  • comp_Z.png (82.4 K, 2019-07-01)
  • comp_bg_NLO.png (56.1 K, 2019-07-04)
  • comp_dh.png (89.3 K, 2019-07-01)
  • dh_results.png (44.8 K, 2019-06-06)
  • mass_comp_ALL.png (33.7 K, 2019-08-29)
  • mdh_gchi_excl_comp.png (126.1 K, 2020-10-01)
  • mdh_xsec_cl.png (48.3 K, 2019-09-06)
  • mzp_mchi_excl_comp.png (196.2 K, 2020-10-01)
  • mzp_xsec_cl.png (51.5 K, 2019-09-06)
  • xsec_factor_bg.png (39.5 K, 2019-07-04)
  • xsec_mdh_155_156.png (58.7 K, 2019-09-11)
  • xsec_mzp_2750.png (43.2 K, 2019-09-12)