Thank you for your answer, Sezgin (@Sezginer).
Following your recommendation,
I split the training data into training and validation files.
However, when I run train_net.py in HierarchicalDet, I get the error below:
FileNotFoundError: [Errno 2] No such file or directory: 'ibrahim/Diseasedataset_base_enumeration_m_t_inference_train/inference/coco_instances_results.json' (raised from HierarchicalDet/hierarchialdet/dataset_mapper.py)
Where do the JSON files referenced in dataset_mapper.py come from?
I think some coco_instances_results.json files originate from Detectron2, but those relate to general objects, people, or natural-scene images, not to panoramic X-ray data.
I obtained a coco_instances_results.json file and an instances_predictions.pth after training Detectron2 with train_net.py.
If the train and val sets are used to produce one coco_instances_results.json file, do I have to train on other data to get the other coco_instances_results.json file? Since the data I used when training Detectron2 was COCO (train2017, val2017), I am wondering whether I should generate the new coco_instances_results.json file using other data (LVIS).
If I don't have to run Detectron2 on that data to get the coco_instances_results.json files,
should I use the data (quadrant, quadrant_enumeration, quadrant_enumeration-disease, unlabelled) directly in training?
Even if I use the X-ray data to generate the coco_instances_results.json files, I can't train on it, because I get the same message:
"No such file or directory: 'ibrahim/Diseasedataset_base_enumeration_m_t_inference_train/inference/coco_instances_results.json'" (from HierarchicalDet/hierarchialdet/dataset_mapper.py)
I would appreciate it if you could give me an idea.
Sincerely,
Gibok
In dataset_mapper.py:
```python
self.img_format = cfg.INPUT.FORMAT
self.is_train = is_train
boxes_train = "ibrahim/Diseasedataset_base_enumeration_m_t_inference_train/inference/coco_instances_results.json"
boxes_valid = "ibrahim/Diseasedataset_base_enumeration_m_t_inference_val/inference/coco_instances_results.json"
self.train_boxes = []
self.valid_boxes = []
f_train = open(boxes_train)
dict_train = json.load(f_train)
f_valid = open(boxes_valid)
dict_valid = json.load(f_valid)
# keep only detections with confidence >= 0.5
for inference in dict_train:
    if inference["score"] >= 0.5:
        self.train_boxes.append(inference)
for inference in dict_valid:
    if inference["score"] >= 0.5:
        self.valid_boxes.append(inference)
```
P.S. As a result, I have decided to train on the training data (quadrant), with the validation data split from it, using the DiffusionDet architecture.
When this training finishes, I suppose I will get a coco_instances_results.json file in the output folder.
Then, how can I use this JSON file for boxes_train and boxes_valid in dataset_mapper.py?
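My current guess is that I would simply point boxes_train and boxes_valid at my own output files and reuse the same load-and-filter step. Here is a minimal self-contained sketch of what I have in mind; the commented-out output paths are only my assumption of where my DiffusionDet run would write its results, and the sample detections are made up by me for the demo:

```python
import json
import os
import tempfile

def load_filtered_boxes(path, score_thresh=0.5):
    """Load a coco_instances_results.json file and keep only detections
    whose confidence is at or above score_thresh, like dataset_mapper.py does."""
    with open(path) as f:
        detections = json.load(f)
    return [d for d in detections if d["score"] >= score_thresh]

# Hypothetical paths -- my guess at where my own run would write the results:
# boxes_train = "output/quadrant_train/inference/coco_instances_results.json"
# boxes_valid = "output/quadrant_val/inference/coco_instances_results.json"

# Self-contained demo with a made-up results file:
sample = [
    {"image_id": 1, "category_id": 3, "bbox": [10, 20, 50, 60], "score": 0.9},
    {"image_id": 1, "category_id": 7, "bbox": [5, 5, 30, 30], "score": 0.2},
]
with tempfile.TemporaryDirectory() as tmp:
    path = os.path.join(tmp, "coco_instances_results.json")
    with open(path, "w") as f:
        json.dump(sample, f)
    kept = load_filtered_boxes(path)
    print(len(kept))  # only the 0.9-score detection passes the 0.5 threshold
```

Is it enough to replace the two hard-coded 'ibrahim/...' paths with my own output paths like this, or does HierarchicalDet expect something more?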