Shinpei Hayashi

About Me

Shinpei Hayashi is an associate professor at the School of Computing, Tokyo Institute of Technology. He received a B.Eng. degree from Hokkaido University in 2004, and M.Eng. and Dr.Eng. degrees from Tokyo Institute of Technology in 2006 and 2008, respectively.

Contact Addresses

Location
West-8E Bldg. #906, Ookayama Campus, Tokyo Institute of Technology
Address
Ookayama 2-12-1-W8-71, Ookayama, Meguro-ku, Tokyo 152-8552, Japan
Phone/Fax.
+81-3-5734-3213 or skype:hayashi.shinpei

Current Interests

Software Engineering.

P{ublic,resent}ations

To Be Published

  1. Shinpei Hayashi, Teppei Kato, Motoshi Saeki: "Locating Concepts on Use Case Steps in Source Code". IEICE Transactions on Information and Systems, vol. E107-D, no. 5. may, 2024.
  2. Shizuka Tsumita, Sousuke Amasaki, Shinpei Hayashi: "The Impact of Module Granularity in IR-based Bug Localization Techniques" (in Japanese). IPSJ Journal. 2024.
  3. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi: "Evaluating Introduction of Systems by Goal Dependency Modeling". IEICE Transactions on Information and Systems. 2024.
  4. Takashi Kobayashi, Shinpei Hayashi, Shinobu Saito: "Technical Debt: The Current Understanding for the Barrier to Evolve Software" (in Japanese). Computer Software. 2024.

Papers Published in Academic Journals

  1. Shunta Shiba and Shinpei Hayashi: "Historinc: A Repository Transformation Tool for Fine-Grained History Tracking" (in Japanese). Computer Software, vol. 39, no. 4, pp. 75-85. nov, 2022.
    ID
    DOI: 10.11309/jssst.39.4_75
    Abstract
    Background: Tracking program elements in source code is useful for program comprehension, code editing support, and other activities. Historage, a history tracking approach based on repository transformation, enables developers to use a familiar interface to track a finer-grained history. Problem: Existing repository transformation tools have performance issues: (1) their transformation steps include the expansion and archiving of snapshots from the object database, and (2) they cannot transform repositories incrementally, which makes them unsuitable for supporting software development activities. Method: In this paper, we describe the design and implementation of a transformation tool, Historinc, that reduces the transformation time. We use git-stein, a repository transformation framework based on recording the mapping between objects, to suppress unnecessary expansion and archiving of files. In addition, we store the mapping and use it later to support incremental transformation. Preliminary Evaluation: We compared the transformation time of our tool with that of an existing one. Furthermore, we compared performance when storing different kinds of mappings. As a result, we found that our tool is more than four times faster than the existing tool and that storing the object mapping is effective.
    BibTeX
    @article{shiba-jssst202211,
        author = {Shunta Shiba and Shinpei Hayashi},
        title = {{Historinc}: A Repository Transformation Tool for Fine-Grained History Tracking},
        journal = {Computer Software},
        volume = 39,
        number = 4,
        pages = {75--85},
        year = 2022,
        month = {nov},
    }
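    Sketch
    The incremental idea above can be pictured in a few lines: persist the mapping between original and transformed objects so that a later run skips objects already processed. This is a minimal, hypothetical Python sketch, not the actual Historinc/git-stein API; the names, the mapping file, and the toy "transformation" are all invented.
        # Hypothetical sketch: incremental transformation via a persisted object mapping.
        import hashlib, json
        from pathlib import Path
        MAPPING_FILE = Path("mapping.json")  # invented location for the stored mapping
        def load_mapping() -> dict:
            return json.loads(MAPPING_FILE.read_text()) if MAPPING_FILE.exists() else {}
        def transform_object(obj_id: str, payload: bytes, mapping: dict) -> str:
            """Transform one object, reusing the stored result if present."""
            if obj_id in mapping:          # transformed in a previous run:
                return mapping[obj_id]     # skip expansion/archiving entirely
            new_payload = payload.upper()  # stand-in for the real rewriting step
            new_id = hashlib.sha1(new_payload).hexdigest()
            mapping[obj_id] = new_id       # record for future incremental runs
            return new_id
        mapping = load_mapping()
        for oid, data in {"a1": b"class A {}", "b2": b"class B {}"}.items():
            print(oid, "->", transform_object(oid, data, mapping))
        MAPPING_FILE.write_text(json.dumps(mapping))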
  2. Yotaro Seki, Shinpei Hayashi, Motoshi Saeki: "Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection". IEICE Transactions on Information and Systems, vol. 105-D, no. 5, pp. 849-863. may, 2022.
    ID
    DOI: 10.1587/transinf.2021KBP0008
    Abstract
    Use case modeling is popular for representing the functionality of the system to be developed, and it consists of two parts: a use case diagram and use case descriptions. Use case descriptions are structured text written in natural language, and the use of natural language can lead to poor descriptions, such as ambiguous, inconsistent, and/or incomplete ones. Poor descriptions lead to missing requirements and the elicitation of incorrect requirements, as well as a less comprehensive use case model. This paper proposes a technique to automate the detection of bad smells in use case descriptions, i.e., symptoms of poor descriptions. First, to clarify bad smells, we analyzed existing use case models to discover poor use case descriptions concretely and developed a list of bad smells, i.e., a catalog of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm to automate their detection. The main contributions of this paper are the developed catalog of bad smells and the automated detection of these bad smells. We first implemented an automated smell detector for 22 bad smells and assessed its usefulness in an experiment. As a result, the first version of our tool achieved a precision of 0.591 and a recall of 0.981. Through evaluating our catalog and the automated tool, we found six additional bad smells and two metrics. The final version of the automated tool obtained a precision of 0.596 and a recall of 1.00.
    BibTeX
    @article{yotaro-ieicet202205,
        author = {Yotaro Seki and Shinpei Hayashi and Motoshi Saeki},
        title = {Cataloging Bad Smells in Use Case Descriptions and Automating Their Detection},
        journal = {IEICE Transactions on Information and Systems},
        volume = {105-D},
        number = 5,
        pages = {849--863},
        year = 2022,
        month = {may},
    }
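    Sketch
    As a toy illustration of turning one smell into an automated check (not the paper's actual catalog, metrics, or tool), the following Python flags a hypothetical "vague wording" smell in use case steps using an invented word list.
        VAGUE_TERMS = {"appropriately", "etc", "some", "several", "quickly"}
        def detect_vague_wording(steps):
            """Return (step number, offending term) pairs found in the steps."""
            findings = []
            for i, step in enumerate(steps, start=1):
                for token in step.lower().replace(".", "").split():
                    if token in VAGUE_TERMS:
                        findings.append((i, token))
            return findings
        steps = ["The user enters an ID and a password.",
                 "The system responds appropriately."]
        print(detect_vague_wording(steps))  # -> [(2, 'appropriately')]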
  3. Shinpei Hayashi, Keisuke Asano, Motoshi Saeki: "Automating Bad Smell Detection in Goal Refinement of Goal Models". IEICE Transactions on Information and Systems, vol. 105-D, no. 5, pp. 837-848. may, 2022.
    ID
    DOI: 10.1587/transinf.2021KBP0006
    Abstract
    Goal refinement is a crucial step in goal-oriented requirements analysis for creating a goal model of high quality. Poor goal refinement leads to missing requirements and the elicitation of incorrect requirements, as well as less comprehensive goal models. This paper proposes a technique to automate the detection of bad smells in goal refinement, i.e., symptoms of poor goal refinement. First, to clarify bad smells, we asked subjects to identify instances of poor goal refinement concretely. Based on a classification of the identified poor refinements, we defined four types of bad smells in goal refinement: Low Semantic Relation, Many Siblings, Few Siblings, and Coarse Grained Leaf. We also developed two types of measures to detect them: measures on the graph structure of a goal model and on the semantic similarity of goal descriptions. We have implemented a supporting tool to detect the bad smells and assessed its usefulness in an experiment.
    BibTeX
    @article{hayashi-ieicet202205,
        author = {Shinpei Hayashi and Keisuke Asano and Motoshi Saeki},
        title = {Automating Bad Smell Detection in Goal Refinement of Goal Models},
        journal = {IEICE Transactions on Information and Systems},
        volume = {105-D},
        number = 5,
        pages = {837--848},
        year = 2022,
        month = {may},
    }
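    Sketch
    Two of the four smells above (Many Siblings and Few Siblings) are purely structural, so a minimal detector only needs the goal tree. The sketch below is illustrative Python with invented thresholds and data, not the paper's calibrated measures.
        GOALS = {  # parent goal -> subgoals (toy goal graph)
            "root": ["g1", "g2", "g3", "g4", "g5", "g6", "g7"],
            "g1": ["g8"],
        }
        def sibling_smells(tree, many=6, few=1):
            for parent, children in tree.items():
                if len(children) >= many:
                    yield parent, "Many Siblings"
                elif len(children) <= few:
                    yield parent, "Few Siblings"
        print(list(sibling_smells(GOALS)))  # [('root', 'Many Siblings'), ('g1', 'Few Siblings')]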
  4. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Supporting Proactive Refactoring: An Exploratory Study on Decaying Modules and Their Prediction". IEICE Transactions on Information and Systems, vol. E104-D, no. 10, pp. 1601-1615. oct, 2021.
    ID
    DOI: 10.1587/transinf.2020EDP7255
    Abstract
    Code smells can be detected using tools such as a static analyzer that detects code smells based on source code metrics. Developers perform refactoring activities based on the result of such detection tools to improve source code quality. However, such an approach can be considered as reactive refactoring, i.e., developers react to code smells after they occur. This means that developers first suffer the effects of low-quality source code before they start solving code smells. In this study, we focus on proactive refactoring, i.e., refactoring source code before it becomes smelly. This approach would allow developers to maintain source code quality without having to suffer the impact of code smells. To support the proactive refactoring process, we propose a technique to detect decaying modules, which are non-smelly modules that are about to become smelly. We present empirical studies on open source projects with the aim of studying the characteristics of decaying modules. Additionally, to facilitate developers in the refactoring planning process, we perform a study on using a machine learning technique to predict decaying modules and report a factor that contributes most to the performance of the model under consideration.
    BibTeX
    @article{natthawute-ieicet202110,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Supporting Proactive Refactoring: An Exploratory Study on Decaying Modules and Their Prediction},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E104-D},
        number = 10,
        pages = {1601--1615},
        year = 2021,
        month = {oct},
    }
  5. Aoi Takahashi, Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "An Extensive Study on Smell-Aware Bug Localization". Journal of Systems and Software, vol. 178, pp. 110986:1-17. aug, 2021.
    ID
    DOI: 10.1016/j.jss.2021.110986
    Abstract
    Bug localization is an important aspect of software maintenance because it can locate modules that should be changed to fix a specific bug. Our previous study showed that the accuracy of the information retrieval (IR)-based bug localization technique improved when used in combination with code smell information. Although this technique showed promise, the study showed limited usefulness because of the small number of 1) projects in the dataset, 2) types of smell information, and 3) baseline bug localization techniques used for assessment. This paper presents an extension of our previous experiments on Bench4BL, the largest benchmark dataset available for bug localization. In addition, we generalized the smell-aware bug localization technique to allow different configurations of smell information, which were combined with various bug localization techniques. Our results confirmed that our technique can improve the performance of IR-based bug localization techniques at the class level even when large datasets are processed. Furthermore, because of the optimized configuration of the smell information, our technique can enhance the performance of most state-of-the-art bug localization techniques.
    BibTeX
    @article{takahashi-a-at-jss202108,
        author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {An Extensive Study on Smell-Aware Bug Localization},
        journal = {Journal of Systems and Software},
        volume = 178,
        pages = {110986:1--17},
        year = 2021,
        month = {aug},
    }
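    Sketch
    The core re-ranking idea described in the abstract (combining an IR similarity score with code smell information per module) can be pictured as a weighted sum. This is a hedged sketch with an invented weighting and data, not the paper's exact formula or optimized configuration.
        def combined_rank(ir_scores, smell_severity, alpha=0.7):
            """ir_scores and smell_severity map module -> score in [0, 1]."""
            combined = {m: alpha * s + (1 - alpha) * smell_severity.get(m, 0.0)
                        for m, s in ir_scores.items()}
            return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
        ir = {"Parser.java": 0.62, "Lexer.java": 0.60, "Util.java": 0.10}
        smell = {"Lexer.java": 0.9}      # e.g., a severe God Class
        print(combined_rank(ir, smell))  # Lexer.java now outranks Parser.java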
  6. Daisuke Shimbara, Motoshi Saeki, Shinpei Hayashi, Øystein Haugen: "Handling Quantity in Variability Models for System-of-Systems". International Journal of Software Engineering and Knowledge Engineering, vol. 31, no. 5, pp. 693-724. may, 2021.
    ID
    DOI: 10.1142/S0218194021500200
    Abstract
    Problem: Modern systems contain parts that are themselves systems. Such complex systems thus have sets of subsystems that have their own variability. These subsystems contribute to the functionality of a whole system-of-systems (SoS). Such systems have a very high degree of variability. Therefore, a modeling technique for the variability of an entire SoS is required to express two different levels of variability: variability of the SoS as a whole and variability of subsystems. If these levels are described together, the model becomes hard to understand. When the variability model of the SoS is described separately, each variability model is represented by a tree structure and these models are combined in a further tree structure. For each node in a variability model, a quantity is assigned to express the multiplicity of its instances per one instance of its parent node. Quantities of the whole system may refer to the number of subsystem instances in the system. From the viewpoint of the entire system, constraints and requirements written in natural language are often ambiguous regarding the quantities of subsystems. Such ambiguous constraints and requirements may lead to misunderstandings or conflicts in an SoS configuration. Approach: A separate notion is proposed for variability of an SoS; one model considers the SoS as an undivided entity, while the other considers it as a combination of subsystems. Moreover, a domain-specific notation is proposed to express relationships among the variability properties of systems, to solve the ambiguity of quantities and establish the total validity. This notation adapts an approach, named Pincer Movement, which can then be used to automatically deduce the quantities for the constraints and requirements. Validation: The descriptive capability of the proposed notation was validated with four examples of cloud providers. In addition, the proposed method and description tool were validated through a simple experiment on describing variability models with real practitioners.
    BibTeX
    @article{shinbara-ijseke202105,
        author = {Daisuke Shimbara and Motoshi Saeki and Shinpei Hayashi and {\O}ystein Haugen},
        title = {Handling Quantity in Variability Models for System-of-Systems},
        journal = {International Journal of Software Engineering and Knowledge Engineering},
        volume = 31,
        number = 5,
        pages = {693--724},
        year = 2021,
        month = {may},
    }
  7. Lan Wang, Shinpei Hayashi, Motoshi Saeki: "Applying Class Distance to Decide Similarity on Information Models for Automated Data Interoperability". International Journal of Software Engineering and Knowledge Engineering, vol. 31, no. 3, pp. 405-434. mar, 2021.
    ID
    DOI: 10.1142/S0218194021500145
    Abstract
    In the world of the Internet of Things (IoT), heterogeneous systems and devices need to be connected and exchange data with others. How data exchange can be automatically realized becomes a critical issue. An information model (IM) is frequently adopted and utilized to solve the data interoperability problem. Meanwhile, as IoT systems and devices can have different IMs with different modeling methodologies and formats such as UML, IEC 61360, etc., automated data interoperability based on various IMs is recognized as an urgent problem. In this paper, we propose an approach to automate the data interoperability, i.e. data exchange among similar entities in different IMs. First, similarity scores among entities are calculated based on their syntactic and semantic features. Then, in order to precisely get similar candidates to exchange data, a concept of class distance calculated with a Virtual Distance Graph (VDG) is proposed to narrow down obtained similar properties for data exchange. Through analyzing the results of a case study, the class distance based on VDG can effectively improve the precisions of calculated similar properties. Furthermore, data exchange rules can be generated automatically. The results reveal that the approach of this research can efficiently contribute to resolving the data interoperability problem.
    BibTeX
    @article{wlan-ijseke202103,
        author = {Lan Wang and Shinpei Hayashi and Motoshi Saeki},
        title = {Applying Class Distance to Decide Similarity on Information Models for Automated Data Interoperability},
        journal = {International Journal of Software Engineering and Knowledge Engineering},
        volume = 31,
        number = 3,
        pages = {405--434},
        year = 2021,
        month = {mar},
    }
  8. Katsuhisa Maruyama, Shinpei Hayashi, Takayuki Omori: "ChangeMacroRecorder: Accurate Recording of Fine-Grained Textual Changes of Source Code". IEICE Transactions on Information and Systems, vol. E103-D, no. 11, pp. 2262-2277. nov, 2020.
    ID
    DOI: 10.1587/transinf.2020EDK0001
    Abstract
    Recording source code changes is becoming well recognized as an effective means for understanding the evolution of existing software and making its future changes efficient. Therefore, modern integrated development environments (IDEs) tend to employ tools that record fine-grained textual changes of source code. However, there is still no satisfactory tool that accurately records textual changes. We propose ChangeMacroRecorder, which automatically and silently records all textual changes of source code and, in real time, correlates those textual changes with the actions causing them while a programmer is writing and modifying code in Eclipse's Java editor. The improvement in the accuracy of recorded textual changes enables both programmers and researchers to understand exactly how the source code has evolved. This paper presents detailed information on how ChangeMacroRecorder achieves the accurate recording of textual changes and demonstrates how accurately textual changes were recorded in our experiment consisting of nine programming tasks.
    BibTeX
    @article{maruyama-ieicet202011,
        author = {Katsuhisa Maruyama and Shinpei Hayashi and Takayuki Omori},
        title = {{ChangeMacroRecorder}: Accurate Recording of Fine-Grained Textual Changes of Source Code},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E103-D},
        number = 11,
        pages = {2262--2277},
        year = 2020,
        month = {nov},
    }
  9. Yoshiki Higo, Shinpei Hayashi, Shinji Kusumoto: "On Tracking Java Methods with Git Mechanisms". Journal of Systems and Software, vol. 165, no. 110571, pp. 1-13. jul, 2020.
    ID
    DOI: 10.1016/j.jss.2020.110571
    Abstract
    Method-level historical information is useful in various research on mining software repositories, such as fault-prone module detection or evolutionary coupling identification. An existing technique named Historage converts a Git repository of a Java project to a finer-grained one. In a finer-grained repository, each Java method exists as a single file. Treating Java methods as files has an advantage: Java methods can be tracked with Git mechanisms. The biggest benefit of tracking methods with Git mechanisms is that they can easily connect with any other tools and techniques built on Git infrastructure. However, Historage's tracking has an accuracy issue, especially on small methods. More concretely, in the case that a small method is renamed or moved to another class, Historage has a limited capability to track the method. In this paper, we propose a new technique, FinerGit, to improve the trackability of Java methods with Git mechanisms. We implement FinerGit as a system and apply it to 182 open source software projects, which include 1,768K methods in total. The experimental results show that our tool has a higher capability of tracking methods in the case that methods are renamed or moved to other classes.
    BibTeX
    @article{higo-jss202007,
        author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
        title = {On Tracking {Java} Methods with {Git} Mechanisms},
        journal = {Journal of Systems and Software},
        volume = 165,
        number = 110571,
        pages = {1--13},
        year = 2020,
        month = {jul},
    }
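    Sketch
    Once a repository is converted so that each Java method lives in its own file, as Historage and FinerGit do, a method's history can be followed with plain Git commands. The file path below is hypothetical and only illustrates the interface, not FinerGit's actual file-naming scheme.
        import subprocess
        def method_history(repo, method_file):
            """List commits touching one method-file, following renames/moves."""
            result = subprocess.run(
                ["git", "-C", repo, "log", "--follow", "--oneline", "--", method_file],
                capture_output=True, text=True, check=True)
            return result.stdout
        # Example call on a hypothetical converted repository:
        # print(method_history("finer-repo", "src/Main#run().mjava"))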
  10. Yoshiki Higo, Shinpei Hayashi, Hideaki Hata, Meiyappan Nagappan: "Ammonia: An Approach for Deriving Project-Specific Bug Patterns". Empirical Software Engineering, vol. 25, no. 3, pp. 1951-1979. mar, 2020.
    ID
    DOI: 10.1007/s10664-020-09807-w
    Abstract
    Finding and fixing buggy code is an important and cost-intensive maintenance task, and static analysis (SA) is one of the methods developers use to perform it. SA tools warn developers about potential bugs by scanning their source code for commonly occurring bug patterns, thus giving those developers opportunities to fix the warnings (potential bugs) before they release the software. Typically, SA tools scan for general bug patterns that are common to any software project (such as null pointer dereference), and not for project-specific patterns. However, past research has pointed to this lack of customizability as a severe limiting issue in SA. Accordingly, in this paper, we propose an approach called Ammonia, which is based on statically analyzing changes across the development history of a project, as a means to identify project-specific bug patterns. Furthermore, the bug patterns identified by our tool do not relate to just one developer or one specific commit; they reflect the project as a whole and complement the warnings from other SA tools that identify general bug patterns. Herein, we report on the application of our implemented tool and approach to four Java projects: Ant, Camel, POI, and Wicket. The results obtained show that our tool could detect 19 project-specific bug patterns across those four projects. Next, through manual analysis, we determined that six of those change patterns were actual bugs and submitted pull requests based on those bug patterns. As a result, five of the pull requests were merged.
    BibTeX
    @article{higo-emse202003,
        author = {Yoshiki Higo and Shinpei Hayashi and Hideaki Hata and Meiyappan Nagappan},
        title = {{Ammonia}: An Approach for Deriving Project-Specific Bug Patterns},
        journal = {Empirical Software Engineering},
        volume = 25,
        number = 3,
        pages = {1951--1979},
        year = 2020,
        month = {mar},
    }
  11. Bushra Aloraini, Meiyappan Nagappan, Daniel M. German, Shinpei Hayashi, Yoshiki Higo: "An Empirical Study of Security Warnings from Static Application Security Testing Tools". Journal of Systems and Software, vol. 158, pp. 1-25. dec, 2019.
    ID
    DOI: 10.1016/j.jss.2019.110427
    Abstract
    The Open Web Application Security Project (OWASP) defines Static Application Security Testing (SAST) tools as those that can help find security vulnerabilities in the source code or compiled code of software. Such tools detect and classify the vulnerability warnings into one of many types (e.g., input validation and representation). It is well known that these tools produce high numbers of false positive warnings. However, what is not known is whether specific types of warnings have a higher predisposition to be false positives or not. Therefore, our goal is to investigate the different types of SAST-produced warnings and their evolution over time to determine if one type of warning is more likely to have false positives than others. To achieve our goal, we carry out a large empirical study where we examine 116 large and popular C++ projects using six different state-of-the-art open source and commercial SAST tools that detect security vulnerabilities. In order to track a piece of code that has been tagged with a warning, we use a new state-of-the-art framework called cregit+ that traces source code lines across different commits. The results demonstrate the potential of using SAST tools as an assessment tool to measure the quality of a product and the possible risks, without manually reviewing the warnings. In addition, this work shows that the pattern-matching static analysis technique is a very powerful method when combined with other advanced analysis methods.
    BibTeX
    @article{bushra-jss201912,
        author = {Bushra Aloraini and Meiyappan Nagappan and Daniel M. German and Shinpei Hayashi and Yoshiki Higo},
        title = {An Empirical Study of Security Warnings from Static Application Security Testing Tools},
        journal = {Journal of Systems and Software},
        volume = 158,
        pages = {1--25},
        year = 2019,
        month = {dec},
    }
  12. Aoi Takahashi, Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Using Code Smells to Improve Information Retrieval-Based Bug Localization" (in Japanese). IPSJ Journal, vol. 60, no. 4, pp. 1040-1050. apr, 2019. Selected as an IPSJ Specially Selected Paper (特選論文).
    URL
    http://id.nii.ac.jp/1001/00195410/
    Abstract
    Bug localization is a technique that has been proposed to support the process of identifying the locations of bugs specified in a bug report. For example, information retrieval (IR)-based bug localization approaches suggest potential locations of the bug based on the similarity between the bug description and the source code. However, while many approaches have been proposed to improve the accuracy, the likelihood of each module having a bug is often overlooked or treated as equal among modules, whereas this may not be the case. For example, modules having code smells have been found to be more prone to changes and bugs. Therefore, in this paper, we propose a technique to leverage code smells to improve bug localization. By combining the code smell severity with the textual similarity from IR-based bug localization, we can identify the modules that are not only similar to the bug description but also have a higher likelihood of containing bugs. Our case study on four open source projects shows that our technique can improve the baseline IR-based approach by 22% and 137% on average at the class and method levels, respectively. In addition, we conducted investigations concerning the effect of code smells on bug localization.
    BibTeX
    @article{takahashi-a-at-ipsjj201904,
        author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Using Code Smells to Improve Information Retrieval-Based Bug Localization},
        journal = {IPSJ Journal},
        volume = 60,
        number = 4,
        pages = {1040--1050},
        year = 2019,
        month = {apr},
    }
  13. Shinpei Hayashi, Fumiki Minami, Motoshi Saeki: "Detecting Architectural Violations Using Responsibility and Dependency Constraints of Components". IEICE Transactions on Information and Systems, vol. E101-D, no. 7, pp. 1780-1789. jul, 2018.
    ID
    DOI: 10.1587/transinf.2017KBP0023
    Abstract
    Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are noncompliant with the architecture as fine-grained architectural violations. For this technique, the inputs are the dependence graph among code fragments extracted from the source code and the inference rules according to the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step by step. The inference rules express the components' responsibilities and dependency constraints. They remove candidate components of each node that do not satisfy the constraints from the current estimated state of the surrounding code fragments. If the inferred role of a code fragment does not include the component that the code fragment currently belongs to, it is detected as a violation. We have implemented our technique for the Model-View-Controller for Web Application architecture pattern. By applying the technique to web applications implemented using Play Framework, we obtained accurate detection results. We also investigated how much each inference rule contributes to the detection of violations.
    BibTeX
    @article{hayashi-ieicet201807,
        author = {Shinpei Hayashi and Fumiki Minami and Motoshi Saeki},
        title = {Detecting Architectural Violations Using Responsibility and Dependency Constraints of Components},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E101-D},
        number = 7,
        pages = {1780--1789},
        year = 2018,
        month = {jul},
    }
  14. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "An Investigative Study on How Developers Filter and Prioritize Code Smells". IEICE Transactions on Information and Systems, vol. E101-D, no. 7, pp. 1733-1742. jul, 2018.
    ID
    DOI: 10.1587/transinf.2017KBP0006
    Abstract
    Code smells are indicators of design flaws or problems in the source code. Various tools and techniques have been proposed for detecting code smells. These tools generally detect a large number of code smells, so approaches have also been developed for prioritizing and filtering code smells. However, lack of empirical data detailing how developers filter and prioritize code smells hinders improvements to these approaches. In this study, we investigated ten professional developers to determine the factors they use for filtering and prioritizing code smells in an open source project under the condition that they complete a list of five tasks. In total, we obtained 69 responses for code smell filtration and 50 responses for code smell prioritization from the ten professional developers. We found that Task relevance and Smell severity were most commonly considered during code smell filtration, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.
    BibTeX
    @article{natthawute-ieicet201807,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {An Investigative Study on How Developers Filter and Prioritize Code Smells},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E101-D},
        number = 7,
        pages = {1733--1742},
        year = 2018,
        month = {jul},
    }
  15. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Context-Based Approach to Prioritize Code Smells for Prefactoring". Journal of Software: Evolution and Process, vol. 30, no. 6, pp. e1886:1-24. jun, 2018.
    ID
    DOI: 10.1002/smr.1886
    Abstract
    Existing techniques for detecting code smells (indicators of source code problems) do not consider the current context, which renders them unsuitable for developers who have a specific context, such as modules within their focus. Consequently, the developers must spend time identifying relevant smells. We propose a technique to prioritize code smells using the developers' context. Explicit data of the context are obtained using a list of issues extracted from an issue tracking system. We applied impact analysis to the list of issues and used the results to specify the context-relevant smells. Results show that our approach can provide developers with a list of prioritized code smells related to their current context. We conducted several empirical studies to investigate the characteristics of our technique and factors that might affect the ranking quality. Additionally, we conducted a controlled experiment with professional developers to evaluate our technique. The results demonstrate the effectiveness of our technique.
    BibTeX
    @article{natthawute-jsep201806,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Context-Based Approach to Prioritize Code Smells for Prefactoring},
        journal = {Journal of Software: Evolution and Process},
        volume = 30,
        number = 6,
        pages = {e1886:1--24},
        year = 2018,
        month = {jun},
    }
  16. Shinpei Hayashi, Ken Aruga, Motoshi Saeki: "reqchecker: A Tool for Detecting Problems in Japanese Requirements Specification Documents Based on IEEE 830 Quality Characteristics" (in Japanese). IEICE Transactions on Information and Systems, vol. J101-D, no. 1, pp. 57-67. jan, 2018.
    ID
    DOI: 10.14923/transinfj.2017SKP0036
    Abstract
    Requirements specification documents often have problems, such as ambiguous sentences, because they are mainly written in natural language. It is important for requirements analysts to find and analyze these problems. In this paper, we propose a technique for detecting problems in a requirements specification document based on the quality characteristics defined in IEEE 830, using the syntactic structure of the specification. Our technique analyzes the structure and relationships of the sentences and of the given specification as a whole. A specification checker named reqchecker, which automates our technique, supports finding problems across six quality characteristics. The preliminary evaluation results show that reqchecker provides acceptable detection accuracy and strong support for some particular quality characteristics.
    BibTeX
    @article{hayashi-ieicet201801,
        author = {Shinpei Hayashi and Ken Aruga and Motoshi Saeki},
        title = {reqchecker: A Tool for Detecting Problems in Japanese Requirements Specification Documents Based on IEEE 830 Quality Characteristics},
        journal = {IEICE Transactions on Information and Systems},
        volume = {J101-D},
        number = 1,
        pages = {57--67},
        year = 2018,
        month = {jan},
    }
  17. Mohamed Wiem Mkaouer, Marouane Kessentini, Mel Ó Cinnéide, Shinpei Hayashi, Kalyanmoy Deb: "A Robust Multi-Objective Approach to Balance Severity and Importance of Refactoring Opportunities". Empirical Software Engineering, vol. 22, no. 2, pp. 894-927. apr, 2017.
    ID
    DOI: 10.1007/s10664-016-9426-8
    Abstract
    Refactoring large systems involves several sources of uncertainty related to the severity levels of code smells to be corrected and the importance of the classes in which the smells are located. Both the severity and the importance of identified refactoring opportunities (e.g., code smells) are difficult to estimate. In fact, due to the dynamic nature of software development, these values cannot be accurately determined in practice, leading to refactoring sequences that lack robustness. In addition, some code fragments can contain severe quality issues while not playing an important role in the system. To address this problem, we introduced a multi-objective robust model, based on NSGA-II, for the software refactoring problem that tries to find the best trade-off between three objectives to maximize: quality improvements, and the severity and importance of the refactoring opportunities to be fixed. We evaluated our approach using 8 open source systems and one industrial project, and demonstrated that it is significantly better than state-of-the-art refactoring approaches in terms of robustness in all the experiments, which were based on a variety of real-world scenarios. Our suggested refactoring solutions were found to be comparable in quality to those suggested by existing approaches, to provide better prioritization of refactoring opportunities, and to carry an acceptable robustness price.
    BibTeX
    @article{mkaouer-emse201704,
        author = {Mohamed Wiem Mkaouer and Marouane Kessentini and Mel {\'{O}} Cinn{\'{e}}ide and Shinpei Hayashi and Kalyanmoy Deb},
        title = {A Robust Multi-Objective Approach to Balance Severity and Importance of Refactoring Opportunities},
        journal = {Empirical Software Engineering},
        volume = 22,
        number = 2,
        pages = {894--927},
        year = 2017,
        month = {apr},
    }
  18. Shinpei Hayashi, Takuto Yanagida, Motoshi Saeki, Hidenori Mimura: "Formalizing Class Responsibility Assignment as Fuzzy Constraint Satisfaction Problem" (in Japanese). IPSJ Journal, vol. 58, no. 4, pp. 795-806. apr, 2017.
    URL
    http://id.nii.ac.jp/1001/00178568/
    Abstract
    The authors formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) to automate CRA, and show the results of automatic assignments on examples. Responsibilities are contracts or obligations that objects should assume; by aligning them to classes appropriately, high-quality designs are realized. Typical aspects of a desirable design are low coupling between highly cohesive classes. However, because of trade-offs among such aspects, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. The authors represent the conditions of such aspects as fuzzy constraints and formulate CRA as an FCSP. This enables us to apply common FCSP-solving algorithms to the problem and to derive solutions representing a class responsibility assignment.
    BibTeX
    @article{hayashi-ipsjj201704,
        author = {Shinpei Hayashi and Takuto Yanagida and Motoshi Saeki and Hidenori Mimura},
        title = {Formalizing Class Responsibility Assignment as Fuzzy Constraint Satisfaction Problem},
        journal = {IPSJ Journal},
        volume = 58,
        number = 4,
        pages = {795--806},
        year = 2017,
        month = {apr},
    }
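    Sketch
    To make the FCSP formulation above concrete: each fuzzy constraint maps a candidate responsibility-to-class assignment to a satisfaction degree in [0, 1], and a solver searches for the assignment maximizing the overall (here, minimum) degree. The constraints and data below are invented for illustration; the paper's formulation and solving algorithms are more general.
        from itertools import product
        RESP = ["validate order", "charge card"]
        CLASSES = ["Order", "Payment"]
        def cohesion(a):  # toy fuzzy constraint: keep validation with Order
            return 1.0 if a["validate order"] == "Order" else 0.4
        def coupling(a):  # toy fuzzy constraint: isolate payment logic in Payment
            return 1.0 if a["charge card"] == "Payment" else 0.3
        best = max((dict(zip(RESP, combo)) for combo in product(CLASSES, repeat=len(RESP))),
                   key=lambda a: min(cohesion(a), coupling(a)))
        print(best)  # {'validate order': 'Order', 'charge card': 'Payment'}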
  19. Hiroshi Kazato, Shinpei Hayashi, Tsuyoshi Oshima, Takashi Kobayashi, Katsuyuki Natsukawa, Takashi Hoshino, Motoshi Saeki: "Cross-layer Feature Location" (in Japanese). IPSJ Journal, vol. 58, no. 4, pp. 885-897. apr, 2017.
    URL
    http://id.nii.ac.jp/1001/00178576/
    Abstract
    In multi-layer systems such as web applications, locating features is a challenging problem because each feature is often realized through a collaboration of program elements belonging to different layers. This paper proposes a semi-automatic technique to extract correspondences between features and program elements among layers, by merging execution traces of every layer and feeding them into formal concept analysis. By applying this technique to a web application, not only modules in the application layer but also web pages in the presentation layer and table accesses in the data layer can be associated with features at once. To show the feasibility of our technique, we applied it to a web application that conforms to the typical three-layer architecture of Java EE and discuss its applicability to other layered systems in the real world.
    BibTeX
    @article{kazato-ipsjj201704,
        author = {Hiroshi Kazato and Shinpei Hayashi and Tsuyoshi Oshima and Takashi Kobayashi and Katsuyuki Natsukawa and Takashi Hoshino and Motoshi Saeki},
        title = {Cross-layer Feature Location},
        journal = {IPSJ Journal},
        volume = 58,
        number = 4,
        pages = {885--897},
        year = 2017,
        month = {apr},
    }
  20. Junzo Kato, Motoshi Saeki, Atsushi Ohnishi, Haruhiko Kaiya, Shinpei Hayashi, Shuichiro Yamamoto: "Supporting Construction of a Thesaurus for Requirements Elicitation" (in Japanese). IPSJ Journal, vol. 57, no. 7, pp. 1576-1589. jul, 2016.
    URL
    http://id.nii.ac.jp/1001/00169441/
    Abstract
    We propose a method of developing a thesaurus for requirements elicitation and its supporting tool. This proposed method consists of two parts - (1) elicitation of candidates of functional requirements to be registered in the thesaurus from technical documents and (2) registration of functional requirements with associated non-functional factors in the thesaurus from these candidates under the direction of domain experts. Our tool supports the first part. This method should satisfy the following two characteristics - (a) extracted functions are correct and (b) any analyst can extract all indispensable functions from technical documents. We show the above two characteristics through case studies and confirm the usability and effectiveness of the proposed method.
    BibTeX
    @article{jkato-ipsjj201607,
        author = {Junzo Kato and Motoshi Saeki and Atsushi Ohnishi and Haruhiko Kaiya and Shinpei Hayashi and Shuichiro Yamamoto},
        title = {Supporting Construction of a Thesaurus for Requirements Elicitation},
        journal = {IPSJ Journal},
        volume = 57,
        number = 7,
        pages = {1576--1589},
        year = 2016,
        month = {jul},
    }
  21. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "Slicing Fine-Grained Code Change History". IEICE Transactions on Information and Systems, vol. E99-D, no. 3, pp. 671-687. mar, 2016.
    ID
    DOI: 10.1587/transinf.2015EDP7282
    Abstract
    Change-aware development environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, since they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, they often eliminate several code changes of no interest by manually skipping them in replaying. This skipping action is an obstacle that makes many programmers hesitate when they use existing replaying tools. This paper proposes a slicing mechanism that automatically removes manually skipped code changes from the whole history of past code changes and extracts only those necessary to build a particular class member of a Java program. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program and dependencies among edit operations are formalized. The paper also presents a running tool that slices the operation history and replays its resulting slices. With this tool, programmers can avoid replaying nonessential edit operations for the construction of class members that they want to understand. Experimental results show that the tool offered improvements over conventional replaying tools with respect to the reduction of the number of edit operations needed to be examined and over history filtering tools with respect to the accuracy of edit operations to be replayed.
    BibTeX
    @article{maruyama-ieicet201603,
        author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
        title = {Slicing Fine-Grained Code Change History},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E99-D},
        number = 3,
        pages = {671--687},
        year = 2016,
        month = {mar},
    }
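    Sketch
    The slicing mechanism above boils down to a transitive-dependency closure over recorded edit operations. Here is a minimal Python rendering with a fabricated history; the paper's formalization of edit operations and their dependencies is far richer.
        DEPENDS = {  # operation -> operations it depends on (fabricated history)
            "e5": {"e3", "e4"}, "e4": {"e2"}, "e3": {"e1"}, "e2": set(), "e1": set(),
        }
        def history_slice(target):
            """Collect the operations needed to reconstruct the target operation."""
            needed, stack = set(), [target]
            while stack:
                op = stack.pop()
                if op not in needed:
                    needed.add(op)
                    stack.extend(DEPENDS.get(op, ()))
            return needed
        print(sorted(history_slice("e5")))  # ['e1', 'e2', 'e3', 'e4', 'e5']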
  22. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Combining Dynamic Feature Location with Call Graph Separation" (in Japanese). IEICE Transactions on Information and Systems, vol. J98-D, no. 11, pp. 1374-1376. nov, 2015.
    ID
    DOI: 10.14923/transinfj.2015SSL0001
    Abstract
    We combine a dynamic feature location technique based on formal concept analysis with a call graph separation technique to obtain sets of modules corresponding to features with good accuracy even when the prepared scenarios are insufficient, and we examine the combined method based on the results of applying it to an example.
    BibTeX
    @article{kato-ieicet201511,
        author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki},
        title = {Combining Dynamic Feature Location with Call Graph Separation},
        journal = {IEICE Transactions on Information and Systems},
        volume = {J98-D},
        number = 11,
        pages = {1374--1376},
        year = 2015,
        month = {nov},
    }
  23. Eunjong Choi, Kenji Fujiwara, Norihiro Yoshida, Shinpei Hayashi: "A Survey of Refactoring Detection Techniques Based on Change History Analysis" (in Japanese). Computer Software, vol. 32, no. 1, pp. 47-59. feb, 2015.
    ID
    DOI: 10.11309/jssst.32.1_47
    Abstract
    Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Not only researchers but also practitioners need to know about past instances of refactoring performed in a software development project. So far, a number of techniques have been proposed for the automatic detection of refactoring instances. These techniques have been presented in various international conferences and journals, and it is difficult for researchers and practitioners to grasp the current status of studies on refactoring detection techniques. In this survey paper, we introduce refactoring detection techniques, especially those based on change history analysis. First, we give the definition and categorization of refactoring detection used in this paper, and then introduce refactoring detection techniques based on change history analysis. Finally, we discuss possible future research directions in refactoring detection.
    BibTeX
    @article{choi-jssst201502,
        author = {Eunjong Choi and Kenji Fujiwara and Norihiro Yoshida and Shinpei Hayashi},
        title = {A Survey of Refactoring Detection Techniques Based on Change History Analysis},
        journal = {Computer Software},
        volume = 32,
        number = 1,
        pages = {47--59},
        year = 2015,
        month = {feb},
    }
  24. Takayuki Omori, Shinpei Hayashi, Katsuhisa Maruyama: "A survey on methods of recording fine-grained operations on integrated development environments and their applications" (in Japanese). Computer Software, vol. 32, no. 1, pp. 60-80. feb, 2015.
    ID
    DOI: 10.11309/jssst.32.1_60
    Abstract
    This paper presents a survey of techniques to record and utilize developers' operations on integrated development environments (IDEs). In particular, we target techniques that treat fine-grained code changes, for reference in software evolution research. We created a three-tiered model to represent the relationships among IDEs, recording techniques, and application techniques. This paper also presents common features of the techniques and their details.
    BibTeX
    @article{omori-jssst201502,
        author = {Takayuki Omori and Shinpei Hayashi and Katsuhisa Maruyama},
        title = {A survey on methods of recording fine-grained operations on integrated development environments and their applications},
        journal = {Computer Software},
        volume = 32,
        number = 1,
        pages = {60--80},
        year = 2015,
        month = {feb},
    }
  25. Daiki Hoshino, Shinpei Hayashi, Motoshi Saeki: "Automated Grouping of Editing Operations of Source Code" (in Japanese). Computer Software, vol. 31, no. 3, pp. 277-283. aug, 2014.
    ID
    DOI: 10.11309/jssst.31.3_277
    Abstract
    In software configuration management, it is important to separate source code changes into meaningful units before committing them (in short, Task Level Commit). However, developers often commit unrelated code changes in a single transaction. To support Task Level Commit, an existing technique uses an editing history of source code and enables developers to group the editing operations in the history. This paper proposes an automated technique for grouping editing operations in a history based on several criteria, including source files, classes, methods, comments, and the times they were edited. We show how our technique reduces developers' cost of separating changes compared with the manual approach.
    BibTeX
    @article{dhoshino-jssst201408,
        author = {Daiki Hoshino and Shinpei Hayashi and Motoshi Saeki},
        title = {Automated Grouping of Editing Operations of Source Code},
        journal = {Computer Software},
        volume = 31,
        number = 3,
        pages = {277--283},
        year = 2014,
        month = {aug},
    }
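    Sketch
    One grouping criterion mentioned above (edit time) can be illustrated by splitting a history whenever the gap between consecutive operations exceeds a threshold or the edited file changes. The history and threshold are invented; the paper combines further criteria (classes, methods, comments).
        EDITS = [(0, "A.java"), (5, "A.java"), (9, "B.java"), (400, "B.java")]
        def group_edits(edits, max_gap=60):
            """Split (timestamp, file) edits into groups by time gap and file."""
            groups, current = [], [edits[0]]
            for prev, cur in zip(edits, edits[1:]):
                if cur[0] - prev[0] > max_gap or cur[1] != prev[1]:
                    groups.append(current)
                    current = []
                current.append(cur)
            groups.append(current)
            return groups
        print(group_edits(EDITS))  # three groups: two A.java edits, one B.java, one late B.java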
  26. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "Quality Properties of Goals in an Attributed Goal Graph" (in Japanese). IPSJ Journal, vol. 55, no. 2, pp. 893-908. feb, 2014.
    URL
    http://id.nii.ac.jp/1001/00098488/
    Abstract
    Goal-oriented requirements analysis (GORA) is a promising technique in requirements engineering, especially for requirements elicitation. This paper aims at developing a technique to support the improvement of goal graphs, which are the resulting artifacts of GORA. We consider that improving existing goals of lower quality is more realistic than creating a goal graph of high quality from scratch. To realize the proposed technique, we formally define quality properties for each goal. Our quality properties result from IEEE Std 830 and past related studies. To define them formally, using attribute values of an attributed goal graph, we formulate predicates for deciding whether a goal satisfies a quality property or not. We have implemented a supporting tool that shows a requirements analyst the goals that do not satisfy the predicates. Our experiments using the tool show that requirements analysts can efficiently find and modify the qualitatively problematic goals.
    BibTeX
    @article{ugai-ipsjj201402,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {Quality Properties of Goals in an Attributed Goal Graph},
        journal = {IPSJ Journal},
        volume = 55,
        number = 2,
        pages = {893--908},
        year = 2014,
        month = {feb},
    }
  27. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge". International Journal of Software Engineering and Knowledge Engineering, vol. 23, no. 5, pp. 695-720. jun, 2013.
    ID
    DOI: 10.1142/S0218194013500174
    Abstract
    Goal-oriented requirements analysis (GORA) is one of the promising techniques to elicit software requirements, and it is natural to consider its application to security requirements analysis. In this paper, we propose a method for goal-oriented security requirements analysis using security knowledge derived from several security targets (STs) compliant with Common Criteria (CC, ISO/IEC 15408). We call such knowledge a security ontology for an application domain (SOAD). Three aspects of security, namely confidentiality, integrity, and availability, are included in the scope of our method because the CC addresses these three aspects. We extract security-related concepts such as assets, threats, countermeasures, and their relationships from STs, and utilize these concepts and relationships for security goal elicitation and refinement in GORA. The use of certified STs as a knowledge source allows us to efficiently reuse security-related concepts of higher quality. To realize our proposed method as a supporting tool, we use an existing method, GOORE (goal-oriented and ontology-driven requirements elicitation), combined with SOAD. In GOORE, terms and their relationships in a domain ontology play an important role in semantic processing such as goal refinement and conflict identification. SOAD is defined based on concepts in STs. In contrast with other goal-oriented security requirements methods, the knowledge derived from actual STs contributes to eliciting security requirements in our method. In addition, the relationships among the assets, threats, objectives, and security functional requirements can be directly reused for the refinement of security goals. We show an illustrative example of the usefulness of our method and evaluate it in comparison with other goal-oriented security requirements analysis methods.
    BibTeX
    @article{saeki-ijseke201306,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge},
        journal = {International Journal of Software Engineering and Knowledge Engineering},
        volume = 23,
        number = 5,
        pages = {695--720},
        year = 2013,
        month = {jun},
    }
  28. Takayuki Omori, Katsuhisa Maruyama, Shinpei Hayashi, Atsushi Sawada: "A Literature Review on Software Evolution Research" (in Japanese). Computer Software, vol. 29, no. 3, pp. 3-28. aug, 2012.
    ID
    DOI: 10.11309/jssst.29.3_3
    Abstract
    Software must be continually evolved to keep up with users’ needs. In this article, we propose a new taxonomy of software evolution. It consists of three perspectives: methods, targets, and objectives of evolution. We also present a literature review on software evolution based on our taxonomy. The result could provide a concrete baseline in discussing research trends and directions in the field of software evolution.
    BibTeX
    @article{omori-jssst201208,
        author = {Takayuki Omori and Katsuhisa Maruyama and Shinpei Hayashi and Atsushi Sawada},
        title = {A Literature Review on Software Evolution Research},
        journal = {Computer Software},
        volume = 29,
        number = 3,
        pages = {3--28},
        year = 2012,
        month = {aug},
    }
  29. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis" (in Japanese). IPSJ Journal, vol. 53, no. 4, pp. 1448-1460. apr, 2012.
    URL
    http://id.nii.ac.jp/1001/00081787/
    Abstract
    Software requirements elicitation is cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as an imbalance or lack of stakeholders. This paper presents a technique and a tool that visualize the strength of stakeholders' interest in concerns on a two-dimensional screen. The tool generates anchored maps from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods, in which stakeholders' interest in concerns and its degree are attached as attributes of goals. Additionally, an experimental evaluation is described, whose results show that users of the tool could identify the imbalance and lack of stakeholders more accurately and in a shorter time than with a table of stakeholders and requirements.
    BibTeX
    @article{ugai-ipsjj201204,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis},
        journal = {IPSJ Journal},
        volume = 53,
        number = 4,
        pages = {1448--1460},
        year = 2012,
        month = {apr},
    }
  30. Shinpei Hayashi, Daisuke Tanabe, Haruhiko Kaiya, Motoshi Saeki: "Impact Analysis on an Attributed Goal Graph". IEICE Transactions on Information and Systems, vol. E95-D, no. 4, pp. 1012-1020. apr, 2012.
    ID
    DOI: 10.1587/transinf.E95.D.1012
    Abstract
    Requirements changes frequently occur at any time in a software development process, and their management is a crucial issue for developing software of high quality. Meanwhile, goal-oriented analysis techniques are being put into practice to elicit requirements. In this situation, the change management of goal graphs and its support are necessary. This paper presents a technique related to the change management of goal graphs, realizing impact analysis on a goal graph when modifications occur. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted. We have implemented a supporting tool for automating the analysis. Two case studies suggested the efficiency of the proposed approach.
    BibTeX
    @article{hayashi-ieicet201204,
        author = {Shinpei Hayashi and Daisuke Tanabe and Haruhiko Kaiya and Motoshi Saeki},
        title = {Impact Analysis on an Attributed Goal Graph},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E95-D},
        number = 4,
        pages = {1012--1020},
        year = 2012,
        month = {apr},
    }
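    Sketch
    The deletion side of the impact analysis described above can be pictured as an achievability check over an AND/OR goal graph: after a goal is removed, an AND-decomposed parent remains achievable only if all of its remaining subgoals are. The graph and semantics below are deliberately simplified Python for illustration, not the AGORA formalization used in the paper.
        GRAPH = {  # goal -> (decomposition, subgoals); goals absent here are leaves
            "secure login": ("AND", ["check password", "lock after retries"]),
            "notify user": ("OR", ["send mail", "send SMS"]),
        }
        def achievable(goal, deleted):
            if goal in deleted:
                return False
            if goal not in GRAPH:  # leaf goal: achievable unless deleted
                return True
            kind, subs = GRAPH[goal]
            results = [achievable(s, deleted) for s in subs]
            return all(results) if kind == "AND" else any(results)
        print(achievable("secure login", {"lock after retries"}))  # False
        print(achievable("notify user", {"send mail"}))            # True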
  31. Haruhiko Kaiya, Yuutarou Shimizu, Hirotaka Yasui, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Enhancing Domain Knowledge for Requirements Elicitation with Web Mining" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 495-509. feb, 2012.
    URL
    http://id.nii.ac.jp/1001/00080661/
    Abstract
    Software engineers require knowledge about a problem domain when they elicit requirements for a system in that domain. Explicit descriptions of such knowledge, such as a domain ontology, contribute to eliciting requirements correctly and completely. Methods for eliciting requirements using an ontology have thus been proposed, and such an ontology is normally developed based on documents and/or experts in the problem domain. However, it is not easy for engineers to elicit requirements correctly and completely only with such a domain ontology because they are not normally experts in the problem domain. In this paper, we propose a method and a tool to enhance a domain ontology using Web mining. Our method and tool help engineers add knowledge suitable for understanding the domain ontology. In our method, candidates for such additional knowledge are gathered from Web pages using keywords in the existing domain ontology. The candidates are then prioritized based on the degree of the relationship between each candidate and the existing ontology, and on the frequency and distribution of the candidate over Web pages. Engineers finally add new knowledge to the existing ontology from these prioritized candidates. We also show an experiment and its results confirming that the enhanced ontology enables engineers to elicit requirements more completely and correctly than the existing ontology does.
    BibTeX
    @article{kaiya-ipsjj201202,
        author = {Haruhiko Kaiya and Yuutarou Shimizu and Hirotaka Yasui and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Enhancing Domain Knowledge for Requirements Elicitation with Web Mining},
        journal = {IPSJ Journal},
        volume = 53,
        number = 2,
        pages = {495--509},
        year = 2012,
        month = {feb},
    }
    [kaiya-ipsjj201202]: as a page
  32. Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki: "Interactive Support for Understanding Feature Implementation with Feature Location" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 578-589. feb, 2012.
    URL
    http://id.nii.ac.jp/1001/00080669/
    Abstract
    This paper proposes an interactive approach for efficiently understanding a feature implementation by applying feature location (FL). Although existing FL techniques can reduce the understanding cost, constructing appropriate inputs for these techniques remains an open issue. In our approach, the inputs of FL are incrementally improved through interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, relevance feedback, obtained by partially judging whether or not a code fragment is relevant to the understanding task, improves the evaluation score of FL. Users can then obtain more accurate results. We have implemented a supporting tool for our approach. Evaluation results using the tool show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than a non-interactive approach.
    BibTeX
    @article{hayashi-ipsjj201202,
        author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki},
        title = {Interactive Support for Understanding Feature Implementation with Feature Location},
        journal = {IPSJ Journal},
        volume = 53,
        number = 2,
        pages = {578--589},
        year = 2012,
        month = {feb},
    }
    [hayashi-ipsjj201202]: as a page
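    Sketch
    A toy sketch of the relevance-feedback step described in the abstract above, using a standard Rocchio-style query update over bag-of-words vectors; the weights, vocabulary, and judged fragments are illustrative assumptions, not the paper's actual mechanism:
    from collections import Counter

    ALPHA, BETA, GAMMA = 1.0, 0.75, 0.25  # conventional Rocchio weights

    def rocchio(query, relevant, irrelevant):
        """Move the query toward fragments judged relevant, away from the rest."""
        updated = Counter({t: ALPHA * w for t, w in query.items()})
        for doc in relevant:
            for t, w in doc.items():
                updated[t] += BETA * w / len(relevant)
        for doc in irrelevant:
            for t, w in doc.items():
                updated[t] -= GAMMA * w / len(irrelevant)
        return Counter({t: w for t, w in updated.items() if w > 0})

    query = Counter({"playback": 1.0})
    relevant = [Counter({"playback": 1.0, "codec": 1.0})]  # judged useful
    irrelevant = [Counter({"playlist": 1.0})]              # judged not useful
    print(rocchio(query, relevant, irrelevant))
    # Counter({'playback': 1.75, 'codec': 0.75}): 'codec' now enriches the query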
  33. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints". IEICE Transactions on Information and Systems, vol. E94-D, no. 3, pp. 612-621. mar, 2011.
    ID
    DOI: 10.1587/transinf.E94.D.612
    Abstract
    Object Constraint Language (OCL) is frequently applied in software development for stipulating formal constraints on software models. Its platform-independent characteristic allows for wide usage during the design phase. However, application in platform-specific processes, such as coding, is less obvious because it requires usage of bespoke tools for that platform. In this paper, we propose an approach to generate assertion code for OCL constraints for multiple platform-specific languages, using a unified framework based on structural similarities of programming languages. We have succeeded in automating the process of assertion code generation for four different languages using our tool. To show the effectiveness of our approach in terms of development effort, an experiment was carried out and summarised.
    BibTeX
    @article{rodion-ieicet201103,
        author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
        title = {Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E94-D},
        number = 3,
        pages = {612--621},
        year = 2011,
        month = {mar},
    }
    [rodion-ieicet201103]: as a page
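    Sketch
    A toy sketch of the paper's central idea as summarized above: one platform-independent constraint rendered as assertion code for several target languages by walking a shared representation. The three-tuple mini-AST and the templates are invented for illustration:
    # ("balance", ">=", 0) stands in for an OCL invariant such as
    #   context Account inv: self.balance >= 0
    INV = ("balance", ">=", 0)

    TEMPLATES = {
        "java":   'assert self.{0} {1} {2} : "invariant violated";',
        "python": "assert self.{0} {1} {2}, 'invariant violated'",
        "c":      "assert(self->{0} {1} {2});",
    }

    def generate(inv, lang):
        attr, op, const = inv
        return TEMPLATES[lang].format(attr, op, const)

    for lang in TEMPLATES:
        print(generate(INV, lang))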
  34. Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki: "Choosing Software Implementation Technologies Using Bayesian Networks" (in Japanese). IPSJ Journal, vol. 51, no. 9, pp. 1765-1776. sep, 2010.
    URL
    http://id.nii.ac.jp/1001/00070349/
    Abstract
    It is difficult to estimate how a combination of implementation technologies influences quality attributes of an entire system. In this paper, we propose a technique to choose implementation technologies by modeling causal dependencies between requirements and technologies probabilistically using Bayesian networks. We have implemented our technique on a Bayesian network tool and applied it to a case study of a business application to show its effectiveness.
    BibTeX
    @article{kazato-ipsjj201009,
        author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Choosing Software Implementation Technologies Using Bayesian Networks},
        journal = {IPSJ Journal},
        volume = 51,
        number = 9,
        pages = {1765--1776},
        year = 2010,
        month = {sep},
    }
    [kazato-ipsjj201009]: as a page
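    Sketch
    A toy sketch of the probabilistic dependency idea summarized above: a two-node Bayesian network linking a technology choice to a quality attribute, updated by Bayes' rule. All names and probabilities are invented for illustration:
    # P(performance = good | technology) and a uniform prior over technologies.
    P_GOOD = {"framework-X": 0.7, "hand-coded": 0.9}
    PRIOR = {"framework-X": 0.5, "hand-coded": 0.5}

    def posterior(evidence_good=True):
        """P(technology | observed performance) via Bayes' rule."""
        joint = {
            t: PRIOR[t] * (P_GOOD[t] if evidence_good else 1 - P_GOOD[t])
            for t in PRIOR
        }
        z = sum(joint.values())
        return {t: round(p / z, 4) for t, p in joint.items()}

    # Observing good performance shifts belief toward the hand-coded option.
    print(posterior(True))  # {'framework-X': 0.4375, 'hand-coded': 0.5625}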
  35. Takashi Kobayashi and Shinpei Hayashi: "Recent Researches for Supporting Software Construction and Maintenance with Data Mining" (in Japanese). Computer Software, vol. 27, no. 3, pp. 13-23. aug, 2010.
    ID
    DOI: 10.11309/jssst.27.3_13
    Abstract
    This paper discusses recent studies on technologies for supporting software construction and maintenance by analyzing various software engineering data. We also introduce typical data mining techniques for analyzing the data.
    BibTeX
    @article{tkobaya-jssst201008,
        author = {Takashi Kobayashi and Shinpei Hayashi},
        title = {Recent Researches for Supporting Software Construction and Maintenance with Data Mining},
        journal = {Computer Software},
        volume = 27,
        number = 3,
        pages = {13--23},
        year = 2010,
        month = {aug},
    }
    [tkobaya-jssst201008]: as a page
  36. Shinpei Hayashi and Yusuke Sasaki and Motoshi Saeki: "Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process" (in Japanese). Computer Software, vol. 27, no. 2, pp. 118-123. may, 2010.
    ID
    DOI: 10.11309/jssst.27.2_118
    Abstract
    This paper proposes a technique for selecting the most appropriate alternative among source code changes based on the commitment of each developer to the software development project. In the technique, we evaluate the alternative changes using an evaluation function that integrates multiple software metrics to suppress the influence of each developer's subjectivity. By regarding the selection of alternative changes as a multiple-criteria decision-making problem, we create the function with the Analytic Hierarchy Process. A preliminary evaluation shows the efficiency of the technique.
    BibTeX
    @article{hayashi-jssst201005,
        author = {Shinpei Hayashi and Yusuke Sasaki and Motoshi Saeki},
        title = {Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process},
        journal = {Computer Software},
        volume = 27,
        number = 2,
        pages = {118--123},
        year = 2010,
        month = {may},
    }
    [hayashi-jssst201005]: as a page
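    Sketch
    A worked mini-example of the AHP step described above: deriving weights for three metrics from a pairwise comparison matrix (via the standard geometric-mean approximation of the principal eigenvector) and scoring two alternative changes. The matrix values, metric names, and scores are illustrative assumptions:
    import math

    CRITERIA = ["coupling", "cohesion", "size"]
    # A[i][j]: how much criterion i matters relative to criterion j (1-9 scale).
    A = [
        [1.0, 3.0, 5.0],
        [1 / 3, 1.0, 3.0],
        [1 / 5, 1 / 3, 1.0],
    ]

    def ahp_weights(matrix):
        """Geometric-mean approximation of the principal eigenvector."""
        gms = [math.prod(row) ** (1 / len(row)) for row in matrix]
        total = sum(gms)
        return [g / total for g in gms]

    weights = ahp_weights(A)  # roughly [0.64, 0.26, 0.10]
    # Normalized metric scores of two alternative changes (assumed numbers).
    alternatives = {"change A": [0.7, 0.5, 0.6], "change B": [0.4, 0.8, 0.7]}
    for name, scores in alternatives.items():
        print(name, round(sum(w * s for w, s in zip(weights, scores)), 3))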
  37. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Search-Based Refactoring Detection from Source Code Revisions". IEICE Transactions on Information and Systems, vol. E93-D, no. 4, pp. 754-762. apr, 2010.
    ID
    DOI: 10.1587/transinf.E93.D.754
    Abstract
    This paper proposes a technique for detecting the occurrences of refactoring from source code revisions. In a real software development process, a refactoring operation may sometimes be performed together with other modifications in the same revision. This means that detecting refactorings from the differences between two versions stored in a software version archive is not usually an easy process. In order to detect these impure refactorings, we model the detection as a graph search. Our technique considers a version of a program as a state and a refactoring as a transition between two states. It then searches for a path from the initial state to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state to choose the candidate refactorings to be applied next and to estimate the heuristic distance to the final state. Through case studies, we show that our approach is feasible for detecting combinations of refactorings.
    BibTeX
    @article{hayashi-ieicet201004,
        author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
        title = {Search-Based Refactoring Detection from Source Code Revisions},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E93-D},
        number = 4,
        pages = {754--762},
        year = 2010,
        month = {apr},
    }
    [hayashi-ieicet201004]: as a page
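    Sketch
    A compact sketch of the search formulation described above: program versions as states, refactorings as transitions, and the difference to the final version as the heuristic. The toy program model (a frozenset of (class, method) pairs) and the single refactoring kind are assumptions for illustration:
    import heapq
    from itertools import count

    def diff(a, b):
        return len(a ^ b)  # symmetric difference as a crude distance

    def moves(state):
        """Candidate refactorings: move a method to another class."""
        classes = {c for c, _ in state}
        for (c, m) in state:
            for target in sorted(classes - {c}):
                yield (f"MoveMethod({m}: {c} -> {target})",
                       frozenset((state - {(c, m)}) | {(target, m)}))

    def detect(initial, final, limit=10_000):
        """Best-first search for a refactoring path between two versions."""
        tick = count()  # tie-breaker for the priority queue
        frontier = [(diff(initial, final), next(tick), initial, [])]
        seen = {initial}
        while frontier and limit:
            limit -= 1
            _, _, state, path = heapq.heappop(frontier)
            if state == final:
                return path
            for op, nxt in moves(state):
                if nxt not in seen:
                    seen.add(nxt)
                    cost = len(path) + 1 + diff(nxt, final)
                    heapq.heappush(frontier, (cost, next(tick), nxt, path + [op]))
        return None

    v1 = frozenset({("Order", "print"), ("Order", "total"), ("Invoice", "send")})
    v2 = frozenset({("Invoice", "print"), ("Order", "total"), ("Invoice", "send")})
    print(detect(v1, v2))  # ['MoveMethod(print: Order -> Invoice)']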
  38. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "ATTED-II provides coexpressed gene networks for Arabidopsis". Nucleic Acids Research, vol. 37, no. Database, pp. D987-D991. jan, 2009.
    ID
    DOI: 10.1093/nar/gkn807
    Abstract
    ATTED-II (http://atted.jp) is a database of gene coexpression in Arabidopsis that can be used to design a wide variety of experiments, including the prioritization of genes for functional identification or for studies of regulatory relationships. Here, we report updates of ATTED-II that focus especially on functionalities for constructing gene networks with regard to the following points: (i) introducing a new measure of gene coexpression to retrieve functionally related genes more accurately, (ii) implementing clickable maps for all gene networks for step-by-step navigation, (iii) applying Google Maps API to create a single map for a large network, (iv) including information about protein-protein interactions, (v) identifying conserved patterns of coexpression and (vi) showing and connecting KEGG pathway information to identify functional modules. With these enhanced functions for gene network representation, ATTED-II can help researchers to clarify the functional and regulatory networks of genes in Arabidopsis.
    BibTeX
    @article{obayashi-nar200901,
        author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {{ATTED-II} provides coexpressed gene networks for Arabidopsis},
        journal = {Nucleic Acids Research},
        volume = 37,
        number = {Database},
        pages = {D987--D991},
        year = 2009,
        month = {jan},
    }
    [obayashi-nar200901]: as a page
  39. Shinpei Hayashi, Junya Katada, Ryota Sakamoto, Takashi Kobayashi, Motoshi Saeki: "Design Pattern Detection by Using Meta Patterns". IEICE Transactions on Information and Systems, vol. E91-D, no. 4, pp. 933-944. apr, 2008.
    ID
    DOI: 10.1093/ietisy/e91-d.4.933
    Abstract
    One of the approaches to improve program understanding is to extract which design patterns are used in existing object-oriented software. This paper proposes a technique for efficiently and accurately detecting occurrences of design patterns included in source code. We use both static and dynamic analyses to achieve detection with high accuracy. Moreover, to reduce computation and maintenance costs, detection conditions are hierarchically specified based on Pree's meta patterns as common structures of design patterns. The usage of Prolog to represent the detection conditions enables us to easily add and modify them. Finally, we have implemented an automated tool as an Eclipse plug-in and conducted experiments with Java programs. The experimental results show the effectiveness of our approach.
    BibTeX
    @article{hayashi-ieicet200804,
        author = {Shinpei Hayashi and Junya Katada and Ryota Sakamoto and Takashi Kobayashi and Motoshi Saeki},
        title = {Design Pattern Detection by Using Meta Patterns},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E91-D},
        number = 4,
        pages = {933--944},
        year = 2008,
        month = {apr},
    }
    [hayashi-ieicet200804]: as a page
  40. Takeshi Obayashi, Shinpei Hayashi, Masayuki Shibaoka, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "COXPRESdb: a database of coexpressed gene networks in mammals". Nucleic Acids Research, vol. 36, no. Database, pp. D77-D82. jan, 2008.
    ID
    DOI: 10.1093/nar/gkm840
    Abstract
    A database of coexpressed gene sets can provide valuable information for a wide variety of experimental designs, such as targeting of genes for functional identification, gene regulation and/or protein-protein interactions. Coexpressed gene databases derived from publicly available GeneChip data are widely used in Arabidopsis research, but platforms that examine coexpression for higher mammals are rather limited. Therefore, we have constructed a new database, COXPRESdb (coexpressed gene database) (http://coxpresdb.hgc.jp), for coexpressed gene lists and networks in human and mouse. Coexpression data could be calculated for 19 777 and 21 036 genes in human and mouse, respectively, by using the GeneChip data in NCBI GEO. COXPRESdb enables analysis of the four types of coexpression networks: (i) highly coexpressed genes for every gene, (ii) genes with the same GO annotation, (iii) genes expressed in the same tissue and (iv) user-defined gene sets. When the networks became too big for the static picture on the web in GO networks or in tissue networks, we used Google Maps API to visualize them interactively. COXPRESdb also provides a view to compare the human and mouse coexpression patterns to estimate the conservation between the two species.
    BibTeX
    @article{obayashi-nar200801,
        author = {Takeshi Obayashi and Shinpei Hayashi and Masayuki Shibaoka and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {{COXPRESdb}: a database of coexpressed gene networks in mammals},
        journal = {Nucleic Acids Research},
        volume = 36,
        number = {Database},
        pages = {D77--D82},
        year = 2008,
        month = {jan},
    }
    [obayashi-nar200801]: as a page
  41. Takeshi Obayashi, Kengo Kinoshita, Kenta Nakai, Masayuki Shibaoka, Shinpei Hayashi, Motoshi Saeki, Daisuke Shibata, Kazuki Saito, Hiroyuki Ohta: "ATTED-II: a database of co-expressed genes and cis elements for identifying co-regulated gene groups in Arabidopsis". Nucleic Acids Research, vol. 35, no. Database, pp. D863-D869. jan, 2007.
    ID
    DOI: 10.1093/nar/gkl783
    Abstract
    Publicly available database of co-expressed gene sets would be a valuable tool for a wide variety of experimental designs, including targeting of genes for functional identification or for regulatory investigation. Here, we report the construction of an Arabidopsis thaliana trans-factor and cis-element prediction database (ATTED-II) that provides co-regulated gene relationships based on co-expressed genes deduced from microarray data and the predicted cis elements. ATTED-II (http://www.atted.bio.titech.ac.jp) includes the following features: (i) lists and networks of co-expressed genes calculated from 58 publicly available experimental series, which are composed of 1388 GeneChip data in A.thaliana; (ii) prediction of cis-regulatory elements in the 200 bp region upstream of the transcription start site to predict co-regulated genes amongst the co-expressed genes; and (iii) visual representation of expression patterns for individual genes. ATTED-II can thus help researchers to clarify the function and regulation of particular genes and gene networks.
    BibTeX
    @article{obayashi-nar200701,
        author = {Takeshi Obayashi and Kengo Kinoshita and Kenta Nakai and Masayuki Shibaoka and Shinpei Hayashi and Motoshi Saeki and Daisuke Shibata and Kazuki Saito and Hiroyuki Ohta},
        title = {{ATTED-II}: a database of co-expressed genes and {\it cis} elements for identifying co-regulated gene groups in {\it Arabidopsis}},
        journal = {Nucleic Acids Research},
        volume = 35,
        number = {Database},
        pages = {D863--D869},
        year = 2007,
        month = {jan},
    }
    [obayashi-nar200701]: as a page
  42. Shinpei Hayashi, Motoshi Saeki, Masahito Kurihara: "Supporting Refactoring Activities Using Histories of Program Modification". IEICE Transactions on Information and Systems, vol. E89-D, no. 4, pp. 1403-1412. apr, 2006.
    ID
    DOI: 10.1093/ietisy/e89-d.4.1403
    Abstract
    Refactoring is one of the promising techniques for improving program design by means of behavior-preserving program transformation, and it is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because doing so requires proper knowledge and skills of a high order. In this paper, we propose a technique to suggest how and where to refactor a program by using a sequence of its modifications. We consider that the histories of program modifications reflect developers' intentions, and focusing on them allows us to provide suitable refactoring guides. Our technique can be automated by storing the correspondence of modification patterns to suitable refactoring operations. By implementing an automated supporting tool, we show its feasibility. The tool is implemented as a plug-in for the Eclipse IDE. It selects refactoring operations by matching a sequence of program modifications against modification patterns.
    BibTeX
    @article{hayashi-ieicet200604,
        author = {Shinpei Hayashi and Motoshi Saeki and Masahito Kurihara},
        title = {Supporting Refactoring Activities Using Histories of Program Modification},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E89-D},
        number = 4,
        pages = {1403--1412},
        year = 2006,
        month = {apr},
    }
    [hayashi-ieicet200604]: as a page
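    Sketch
    An illustrative sketch of the stored correspondence described above: modification patterns matched against the recent edit sequence to select a refactoring suggestion. The pattern table and edit names are invented for illustration:
    PATTERNS = {
        # repeated copy-then-edit of a method body often precedes Extract Method
        ("copy", "paste", "edit"): "Extract Method",
        # repeatedly widening a parameter list hints at Introduce Parameter Object
        ("add-parameter", "add-parameter", "add-parameter"): "Introduce Parameter Object",
    }

    def suggest(history, patterns=PATTERNS):
        """Return refactorings whose pattern occurs as a contiguous subsequence."""
        hits = []
        for pattern, refactoring in patterns.items():
            n = len(pattern)
            if any(tuple(history[i:i + n]) == pattern
                   for i in range(len(history) - n + 1)):
                hits.append(refactoring)
        return hits

    edits = ["rename", "copy", "paste", "edit", "save"]
    print(suggest(edits))  # ['Extract Method']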

Research Talks Presented in International Conferences, Workshops, or Symposia

  1. Shinpei Hayashi, Takashi Kobayashi, Tadahisa Kato: "Evaluation of Cross-Lingual Bug Localization: Two Industrial Cases". In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME 2023), Industry Track, pp. 495-499. Bogota, Colombia, oct, 2023.
    ID
    DOI: 10.1109/ICSME58846.2023.00063
    URL
    https://conf.researchr.org/details/icsme-2023/icsme-2023-industry-track/11/Evaluation-of-Cross-Lingual-Bug-Localization-Two-Industrial-Cases
    Abstract
    This study reports the results of applying the cross-lingual bug localization approach proposed by Xia et al. to industrial software projects. To realize cross-lingual bug localization, we applied machine translation to non-English descriptions in the source code and bug reports, unifying them into English-based texts, to which an existing English-based bug localization technique was applied. In addition, a prototype tool based on BugLocator was implemented and applied to two Japanese industrial projects, which resulted in a slightly different performance from that of Xia et al.
    BibTeX
    @inproceedings{hayashi-icsme2023,
        author = {Shinpei Hayashi and Takashi Kobayashi and Tadahisa Kato},
        title = {Evaluation of Cross-Lingual Bug Localization: Two Industrial Cases},
        booktitle = {Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution},
        pages = {495--499},
        year = 2023,
        month = {oct},
    }
    [hayashi-icsme2023]: as a page
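    Sketch
    A minimal pipeline sketch of the approach described above: translate non-English text into English, then rank source files with a plain TF-IDF/cosine ranker standing in for BugLocator. The translate stub, corpus, and bug report are toy assumptions:
    import math
    from collections import Counter

    def translate(text):
        # Stub: a real pipeline would call a machine translation service here.
        return text.replace("再生が失敗する", "playback fails")

    FILES = {
        "Player.java": "start playback stream decode audio",
        "Uploader.java": "upload file form validation",
    }

    def cosine(a, b):
        dot = sum(w * b.get(t, 0.0) for t, w in a.items())
        na = math.sqrt(sum(w * w for w in a.values()))
        nb = math.sqrt(sum(w * w for w in b.values()))
        return dot / (na * nb) if na and nb else 0.0

    report = translate("video 再生が失敗する")
    docs = {name: text.split() for name, text in FILES.items()}
    docs["<report>"] = report.split()
    df = Counter(t for toks in docs.values() for t in set(toks))
    n = len(docs)
    vectors = {name: {t: c * math.log(n / df[t]) for t, c in Counter(toks).items()}
               for name, toks in docs.items()}
    query = vectors.pop("<report>")
    for name, vec in sorted(vectors.items(), key=lambda kv: -cosine(query, kv[1])):
        print(name, round(cosine(query, vec), 3))  # Player.java ranks first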
  2. Motoki Abe, Shinpei Hayashi: "RefSearch: A Search Engine for Refactoring". In Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution (ICSME 2023), Tool Demonstration Track, pp. 547-552. Bogota, Colombia, oct, 2023. Received the Best ICSME 2023 Artifact Award.
    ID
    DOI: 10.1109/ICSME58846.2023.00070
    URL
    https://conf.researchr.org/details/icsme-2023/icsme-2023-tool-demo-track/9/RefSearch-A-Search-Engine-for-Refactoring
    Abstract
    Developers often refactor source code to improve its quality during software development. A challenge in refactoring is to determine if it can be applied or not. To help with this decision-making process, we aim to search for past refactoring cases that are similar to the current refactoring scenario. We have designed and implemented a system called RefSearch that enables users to search for refactoring cases through a user-friendly query language. The system collects refactoring instances using two refactoring detectors and provides a web interface for querying and browsing the cases. We used four refactoring scenarios as test cases to evaluate the expressiveness of the query language and the search performance of the system. RefSearch is available at https://github.com/salab/refsearch.
    BibTeX
    @inproceedings{toki-icsme2023,
        author = {Motoki Abe and Shinpei Hayashi},
        title = {{RefSearch}: A Search Engine for Refactoring},
        booktitle = {Proceedings of the 39th IEEE International Conference on Software Maintenance and Evolution},
        pages = {547--552},
        year = 2023,
        month = {oct},
    }
    [toki-icsme2023]: as a page
  3. Hiroto Sugimori, Shinpei Hayashi: "Towards Fine-grained Software Change Prediction". (MSR Asia Summit 2023). Hokkaido, Japan, jul, 2023.
    BibTeX
    @misc{sugimori-msrasiasummit2023,
        author = {Hiroto Sugimori and Shinpei Hayashi},
        title = {Towards Fine-grained Software Change Prediction},
        year = 2023,
        month = {jul},
    }
    [sugimori-msrasiasummit2023]: as a page
  4. Shinpei Hayashi, Teppei Kato, Motoshi Saeki: "Locating Procedural Steps in Source Code". In Proceedings of the 47th IEEE Computer Software and Applications Conference (QUORS 2023), co-located with COMPSAC 2023, pp. 1607-1612. Torino, Italy, jun, 2023.
    ID
    DOI: 10.1109/COMPSAC57700.2023.00248
    Abstract
    Some documents, such as use case descriptions, describe features consisting of multiple concepts that follow a procedural flow. Because existing feature location techniques do not consider the relations between concepts in such features, it is difficult to identify the concepts in the source code with high accuracy. This paper presents a technique to locate concepts in a feature described in a structured document consisting of multiple procedural steps, such as a use case description, using the dependency between the concepts. We apply an existing concept location technique to the descriptions of concepts and obtain a list of modules. Modules failing to match the dependency between concepts are filtered out. Then, we can obtain a more precise list of modules. The conducted experiment underscores the effectiveness of our technique.
    BibTeX
    @inproceedings{hayashi-quors2023,
        author = {Shinpei Hayashi and Teppei Kato and Motoshi Saeki},
        title = {Locating Procedural Steps in Source Code},
        booktitle = {Proceedings of the 47th IEEE Computer Software and Applications Conference},
        pages = {1607--1612},
        year = 2023,
        month = {jun},
    }
    [hayashi-quors2023]: as a page
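    Sketch
    An illustrative sketch of the filtering idea described above: candidate modules for consecutive use-case steps are kept only when their call dependencies match the step order. The candidate lists and the call graph are toy assumptions:
    from itertools import product

    # Call graph among modules: caller -> set of callees.
    CALLS = {
        "CartController.add": {"CartService.add"},
        "CartService.add": {"StockRepository.reserve"},
    }

    def reachable(src, dst, calls=CALLS):
        """True if dst is transitively callable from src."""
        stack, seen = [src], set()
        while stack:
            cur = stack.pop()
            if cur == dst:
                return True
            if cur not in seen:
                seen.add(cur)
                stack.extend(calls.get(cur, ()))
        return False

    def filter_candidates(step_candidates):
        """Keep module tuples where each step can invoke the next one."""
        return [combo for combo in product(*step_candidates)
                if all(reachable(a, b) for a, b in zip(combo, combo[1:]))]

    # Concept location candidates for two consecutive steps (assumed output).
    steps = [["CartController.add", "WishController.add"],
             ["StockRepository.reserve"]]
    print(filter_candidates(steps))
    # [('CartController.add', 'StockRepository.reserve')]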
  5. Shizuka Tsumita, Shinpei Hayashi, Sousuke Amasaki: "Large-Scale Evaluation of Method-Level Bug Localization with FinerBench4BL". In Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2023), RENE Track, pp. 815-824. Macao SAR, China, mar, 2023.
    ID
    DOI: 10.1109/SANER56733.2023.00094
    Abstract
    Bug localization is an important aspect of software maintenance because it can locate modules that need to be changed to fix a specific bug. Although method-level bug localization is helpful for developers, there are only a few tools and techniques for this task; moreover, there is no large-scale framework for their evaluation. In this paper, we present FinerBench4BL, an evaluation framework for method-level information retrieval-based bug localization techniques, and a comparative study using this framework. This framework was semi-automatically constructed from Bench4BL, a file-level bug localization evaluation framework, using a repository transformation approach: we converted the original file-level version repositories provided by Bench4BL into method-level repositories. Method-level data components such as oracle methods can also be automatically derived by applying the oracle generation approach via bug-commit linking in Bench4BL to the generated method repositories. Furthermore, we tailored existing file-level bug localization technique implementations at the method level. We created a framework for method-level evaluation by merging the generated dataset and implementations. The comparison results show that the method-level techniques decreased accuracy but improved debugging efficiency compared with the file-level techniques.
    BibTeX
    @inproceedings{tsumita-saner2023,
        author = {Shizuka Tsumita and Shinpei Hayashi and Sousuke Amasaki},
        title = {Large-Scale Evaluation of Method-Level Bug Localization with {FinerBench4BL}},
        booktitle = {Proceedings of the 30th IEEE International Conference on Software Analysis, Evolution and Reengineering},
        pages = {815--824},
        year = 2023,
        month = {mar},
    }
    [tsumita-saner2023]: as a page
  6. Yuki Osumi, Naotaka Umekawa, Hitomi Komata, Shinpei Hayashi: "Empirical Study of Co-Renamed Identifiers". In Proceedings of the 29th Asia-Pacific Software Engineering Conference (APSEC 2022), pp. 71-80. Online, dec, 2022.
    ID
    DOI: 10.1109/APSEC57359.2022.00019
    URL
    https://conf.researchr.org/details/apsec-2022/apsec-2022-technical-track/8/Empirical-Study-of-Co-Renamed-Identifiers
    Abstract
    The renaming of program identifiers is the most common refactoring operation. Because some identifiers are related to each other, developers may need to rename related identifiers together. Aims: To understand how developers rename multiple identifiers simultaneously, it is necessary to consider the relationships between identifiers in the program as well as the matching of non-identical but semantically similar identifiers. Method: We investigate the relationships between co-renamed identifiers and identify the types of relationships that contribute to improving the recommendation, using more than 1M renaming instances collected from the histories of open-source software projects. We also evaluate and compare the impact of co-renaming and the relationships between identifiers when inflections occurring in the words of identifiers are taken into account. Results: We revealed several relationships that are frequently found among co-renamed identifiers, such as the identifiers of methods in the same class or an identifier defining a variable and another used for initializing that variable, depending on the type of the renamed identifiers. Additionally, the consideration of inflections did not affect the tendency of the relationships. Conclusion: These results suggest an approach that prioritizes the identifiers to be recommended depending on their types and the type of the renamed identifier.
    BibTeX
    @inproceedings{osumi-apsec2022,
        author = {Yuki Osumi and Naotaka Umekawa and Hitomi Komata and Shinpei Hayashi},
        title = {Empirical Study of Co-Renamed Identifiers},
        booktitle = {Proceedings of the 29th Asia-Pacific Software Engineering Conference},
        pages = {71--80},
        year = 2022,
        month = {dec},
    }
    [osumi-apsec2022]: as a page
  7. Keisuke Isemoto, Takashi Kobayashi, Shinpei Hayashi: "Revisiting the Effect of Branch Handling Strategies on Change Recommendation". In Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension (ICPC 2022), Replications and Negative Results (RENE) Track, pp. 162-172. Online, may, 2022.
    ID
    DOI: 10.1145/3524610.3527870
    URL
    https://conf.researchr.org/details/icpc-2022/icpc-2022-rene/4/Revisiting-the-Effect-of-Branch-Handling-Strategies-on-Change-Recommendation
    Abstract
    Although the literature has noted the effects of branch handling strategies on change recommendation based on evolutionary coupling, they have been tested in a limited experimental setting. Additionally, the branch characteristics that lead to these effects have not been investigated. In this study, we revisited the investigation conducted by Kovalenko et al. on the effect of two different branch handling strategies on change recommendation: including changesets from commits on a branch and excluding them. In addition to the setting by Kovalenko et al., we introduced another setting to compare: extracting a changeset for a branch from a merge commit at once. We compared the change recommendation results and the similarity of the extracted co-changes to future co-changes, obtained using the two strategies, across 30 open-source software systems. The results show that handling commits on a branch separately is often more appropriate in change recommendation, although the comparison in the additional setting resulted in a balanced performance among the branch handling strategies. Additionally, we found that the merge commit size and the branch length positively influence the change recommendation results.
    Slide
    BibTeX
    @inproceedings{k_isemoto-icpc2022,
        author = {Keisuke Isemoto and Takashi Kobayashi and Shinpei Hayashi},
        title = {Revisiting the Effect of Branch Handling Strategies on Change Recommendation},
        booktitle = {Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension},
        pages = {162--172},
        year = 2022,
        month = {may},
    }
    [k_isemoto-icpc2022]: as a page
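    Sketch
    A small sketch contrasting the branch handling strategies compared above: treating each branch commit as its own changeset versus merging the whole branch into one changeset at the merge commit. The commit contents are toy values:
    from collections import Counter
    from itertools import combinations

    # Commits on one feature branch (assumed), each a set of changed files.
    branch_commits = [{"A.java", "B.java"}, {"B.java", "C.java"}]

    def cochange_pairs(changesets):
        """Count file pairs that change together in each changeset."""
        pairs = Counter()
        for cs in changesets:
            pairs.update(frozenset(p) for p in combinations(sorted(cs), 2))
        return pairs

    # Strategy 1: keep the commits on the branch separate.
    print(cochange_pairs(branch_commits))
    # Strategy 2: one changeset for the whole branch, taken at the merge commit.
    print(cochange_pairs([set().union(*branch_commits)]))
    # The merged strategy adds the pair {A.java, C.java}, which never co-changed
    # in a single commit; this is exactly where the strategies diverge.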
  8. Lei Chen, Shinpei Hayashi: "Impact of Change Granularity in Refactoring Detection". In Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension (ICPC 2022), Early Research Achievements (ERA) Track, pp. 565-569. Online, may, 2022.
    ID
    DOI: 10.1145/3524610.3528386
    URL
    https://conf.researchr.org/details/icpc-2022/icpc-2022-era/1/Impact-of-Change-Granularity-in-Refactoring-Detection
    Abstract
    Detecting refactorings in commit history is essential to improve the comprehension of code changes in code reviews and to provide valuable information for empirical studies on software evolution. Several techniques have been proposed to detect refactorings accurately at the granularity level of a single commit. However, refactorings may be performed over multiple commits because of code complexity or other real development problems, which is why attempting to detect refactorings at single-commit granularity is insufficient. We observe that some refactorings can be detected only at coarser granularity, that is, changes spread across multiple commits. Herein, this type of refactoring is referred to as coarse-grained refactoring (CGR). We compared the refactorings detected on different granularities of commits from 19 open-source repositories. The results show that CGRs are common, and their frequency increases as the granularity becomes coarser. In addition, we found that Move-related refactorings tended to be the most frequent CGRs. We also analyzed the causes of CGR and suggested that CGRs will be valuable in refactoring research.
    BibTeX
    @inproceedings{chenlei-icpc2022,
        author = {Lei Chen and Shinpei Hayashi},
        title = {Impact of Change Granularity in Refactoring Detection},
        booktitle = {Proceedings of the 30th IEEE/ACM International Conference on Program Comprehension},
        pages = {565--569},
        year = 2022,
        month = {may},
    }
    [chenlei-icpc2022]: as a page
  9. Aoi Takahashi, Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "An Extensive Study on Smell-Aware Bug Localization". 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021), Journal-First Papers Track. nov, 2021.
    URL
    https://conf.researchr.org/details/ase-2021/ase-2021-journal-first-papers/4/An-Extensive-Study-on-Smell-Aware-Bug-Localization
    Slide
    BibTeX
    @misc{takahashi-a-at-ase2021,
        author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {An Extensive Study on Smell-Aware Bug Localization},
        howpublished = {36th IEEE/ACM International Conference on Automated Software Engineering},
        year = 2021,
        month = {nov},
    }
    [takahashi-a-at-ase2021]: as a page
  10. Yoshiki Higo, Shinpei Hayashi, Shinji Kusumoto: "On Tracking Java Methods with Git Mechanisms". 36th IEEE/ACM International Conference on Automated Software Engineering (ASE 2021), Journal-First Papers Track. nov, 2021.
    URL
    https://conf.researchr.org/details/ase-2021/ase-2021-journal-first-papers/2/On-Tracking-Java-Methods-with-Git-Mechanisms
    BibTeX
    @misc{higo-ase2021,
        author = {Yoshiki Higo and Shinpei Hayashi and Shinji Kusumoto},
        title = {On Tracking {Java} Methods with {Git} Mechanisms},
        howpublished = {36th IEEE/ACM International Conference on Automated Software Engineering},
        year = 2021,
        month = {nov},
    }
    [higo-ase2021]: as a page
  11. Mahfouth Alghamdi, Shinpei Hayashi, Takashi Kobayashi, Christoph Treude: "Characterising the Knowledge about Primitive Variables in Java Code Comments". In Proceedings of the 18th IEEE/ACM International Conference on Mining Software Repositories (MSR 2021), pp. 460-470. may, 2021.
    ID
    DOI: 10.1109/MSR52588.2021.00058
    URL
    https://2021.msrconf.org/details/msr-2021-technical-papers/46/Characterising-the-Knowledge-about-Primitive-Variables-in-Java-Code-Comments
    Abstract
    Primitive types are fundamental components available in any programming language, and they serve as the building blocks of data manipulation. Understanding the role of these types in source code is essential to writing software. The most convenient way to express the functionality of these variables in the code is to describe them in comments. Little work has been conducted on how often these variables are documented in code comments and on what types of knowledge the comments provide about variables of primitive types. In this paper, we present an approach for detecting primitive variables and their descriptions in comments using lexical matching and semantic matching. We evaluate our approaches by comparing the lexical and semantic matching performance in terms of recall, precision, and F-score against 600 manually annotated variables from a sample of GitHub projects. In terms of F-score, our semantic approach outperformed lexical matching (0.986 vs. 0.942). We then create a taxonomy of the types of knowledge contained in these comments about variables of primitive types. Our study showed that developers usually documented the identifiers of variables of a numeric data type with their purpose (69.16%) and concept (72.75%) more often than the identifiers of variables of type String, which were less documented with purpose (61.14%) and concept (55.46%). Our findings characterise the current state of the practice of documenting primitive variables and point at areas that are often not well documented, such as the meaning of boolean variables or the purpose of fields and local variables.
    BibTeX
    @inproceedings{mahfouth-msr2021,
        author = {Mahfouth Alghamdi and Shinpei Hayashi and Takashi Kobayashi and Christoph Treude},
        title = {Characterising the Knowledge about Primitive Variables in {Java} Code Comments},
        booktitle = {Proceedings of the 18th IEEE/ACM International Conference on Mining Software Repositories},
        pages = {460--470},
        year = 2021,
        month = {may},
    }
    [mahfouth-msr2021]: as a page
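    Sketch
    A sketch of the lexical-matching baseline described above: a primitive variable is linked to comment text that mentions part of its name verbatim. The code sample, regexes, and tokenization are illustrative assumptions:
    import re

    code = """
    // count of retry attempts before giving up
    int retryCount = 3;
    // true when the cache holds fresh data
    boolean cacheFresh = false;
    """

    decl = re.compile(r"(?:int|long|float|double|boolean|char|byte|short)\s+(\w+)")
    comments = re.findall(r"//\s*(.+)", code)

    for var in decl.findall(code):
        # Split camelCase into words and look for any of them in each comment.
        words = [w.lower() for w in re.findall(r"[A-Za-z][a-z]*", var)]
        linked = [c for c in comments if any(w in c.lower().split() for w in words)]
        print(var, "->", linked)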
  12. Ryo Kuramoto, Motoshi Saeki, Shinpei Hayashi: "RefactorHub: A Commit Annotator for Refactoring". In Proceedings of the 29th IEEE/ACM International Conference on Program Comprehension (ICPC 2021), pp. 495-499. may, 2021.
    ID
    DOI: 10.1109/ICPC52881.2021.00058
    URL
    https://conf.researchr.org/details/icpc-2021/icpc-2021-tool-demonstration/2/RefactorHub-A-Commit-Annotator-for-Refactoring
    Abstract
    It is necessary to gather real refactoring instances while conducting empirical studies on refactoring. However, existing refactoring detection approaches are insufficient in terms of their accuracy and coverage. Reducing the manual effort of curating refactoring data is challenging in terms of obtaining various refactoring data accurately. This paper proposes a tool named RefactorHub, which supports users to manually annotate potential refactoring-related commits obtained from existing refactoring detection approaches to make their refactoring information more accurate and complete with rich details. In the proposed approach, the parameters of each refactoring operation are defined as a meaningful set of code elements in the versions before or after refactoring. RefactorHub provides interfaces and supporting features to annotate each parameter, such as the automated filling of dependent parameters, thereby avoiding wrong or uncertain selections. A preliminary user study showed that RefactorHub reduced annotation effort and improved the degree of agreement among users. Source code and demo video are available at https://github.com/salab/RefactorHub
    Slide
    BibTeX
    @inproceedings{kuramoto-icpc2021,
        author = {Ryo Kuramoto and Motoshi Saeki and Shinpei Hayashi},
        title = {{RefactorHub}: A Commit Annotator for Refactoring},
        booktitle = {Proceedings of the 29th IEEE/ACM International Conference on Program Comprehension},
        pages = {495--499},
        year = 2021,
        month = {may},
    }
    [kuramoto-icpc2021]: as a page
  13. Satoshi Yamashita, Shinpei Hayashi, Motoshi Saeki: "ChangeBeadsThreader: An Interactive Environment for Tailoring Automatically Untangled Changes". In Proceedings of the 27th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2020), Tool Demonstration Track, pp. 657-661. London, Canada, feb, 2020.
    ID
    DOI: 10.1109/SANER48275.2020.9054861
    Abstract
    To improve the usability of a revision history, change untangling, which reconstructs the history to ensure that changes in each commit belong to one intentional task, is important. Although there are several untangling approaches based on the clustering of fine-grained editing operations of source code, they often produce unsuitable results for developers, and manual tailoring of the results is necessary. In this paper, we propose ChangeBeadsThreader (CBT), an interactive environment for splitting and merging change clusters to support the manual tailoring of untangled changes. CBT provides two features: 1) a two-dimensional space where the fine-grained change history is visualized to help users find the clusters to be merged and 2) an augmented diff view that enables users to confirm the consistency of the changes in a specific cluster for finding those to be split. These features allow users to easily tailor automatically untangled changes.
    BibTeX
    @inproceedings{yamashita-saner2020,
        author = {Satoshi Yamashita and Shinpei Hayashi and Motoshi Saeki},
        title = {{ChangeBeadsThreader}: An Interactive Environment for Tailoring Automatically Untangled Changes},
        booktitle = {Proceedings of the 27th IEEE International Conference on Software Analysis, Evolution and Reengineering},
        pages = {657--661},
        year = 2020,
        month = {feb},
    }
    [yamashita-saner2020]: as a page
  14. Yutaro Otani, Motoshi Saeki, Shinpei Hayashi: "Toward Automated Refactoring of Clone Groups". Presented at 10th International Workshop on Empirical Software Engineering in Practice (IWESEP 2019). Tokyo, Japan, dec, 2019.
    BibTeX
    @misc{yutarootani-iwesep2019,
        author = {Yutaro Otani and Motoshi Saeki and Shinpei Hayashi},
        title = {Toward Automated Refactoring of Clone Groups},
        howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
        year = 2019,
        month = {dec},
    }
    [yutarootani-iwesep2019]: as a page
  15. Satoshi Yamashita, Shinpei Hayashi, Motoshi Saeki: "An Interactive Environment for Tailoring Automatically Untangled Changes". Presented at 10th International Workshop on Empirical Software Engineering in Practice (IWESEP 2019). Tokyo, Japan, dec, 2019.
    BibTeX
    @misc{yamashita-iwesep2019,
        author = {Satoshi Yamashita and Shinpei Hayashi and Motoshi Saeki},
        title = {An Interactive Environment for Tailoring Automatically Untangled Changes},
        howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
        year = 2019,
        month = {dec},
    }
    [yamashita-iwesep2019]: as a page
  16. Aoi Takahashi, Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Investigating Effective Usages of Code Smell Information for Bug Localization". Presented at 10th International Workshop on Empirical Software Engineering in Practice (IWESEP 2019). Tokyo, Japan, dec, 2019.
    BibTeX
    @misc{takahashi-a-at-iwesep2019,
        author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Investigating Effective Usages of Code Smell Information for Bug Localization},
        howpublished = {Presented at 10th International Workshop on Empirical Software Engineering in Practice},
        year = 2019,
        month = {dec},
    }
    [takahashi-a-at-iwesep2019]: as a page
  17. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Can Automated Impact Analysis Techniques Help Predict Decaying Modules?". In Proceedings of the 35th IEEE International Conference on Software Maintenance and Evolution (ICSME 2019), pp. 541-545. Cleveland, OH, USA, oct, 2019.
    ID
    DOI: 10.1109/ICSME.2019.00088
    Abstract
    A decaying module refers to a module whose quality is getting worse and which is likely to become smelly in the future. The concept has been proposed to mitigate the problem that developers cannot track the progression of code smells and prevent them from occurring. To support developers in a proactive refactoring process that prevents code smells, a prediction approach has been proposed to detect modules that are likely to become decaying modules in the next milestone. Our prior study has shown that the modules that developers will modify, used as an estimation of the developers' context, can significantly improve the performance of the prediction model. Nevertheless, this requires a developer who has perfect knowledge of the locations of changes to manually specify such information to the system. To this end, in this study, we explore the use of automated impact analysis techniques to estimate the developers' context. Such techniques enable developers to improve the performance of the decaying module prediction model without the need for perfect knowledge or manual input to the system. Furthermore, we conduct a study on the relationship between the accuracy of an impact analysis technique and its effect on improving decaying module prediction, as well as the future directions that should be explored.
    Slide
    BibTeX
    @inproceedings{natthawute-icsme2019,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Can Automated Impact Analysis Techniques Help Predict Decaying Modules?},
        booktitle = {Proceedings of the 35th IEEE International Conference on Software Maintenance and Evolution},
        pages = {541--545},
        year = 2019,
        month = {oct},
    }
    [natthawute-icsme2019]: as a page
  18. Yotaro Seki, Shinpei Hayashi, Motoshi Saeki: "Detecting Bad Smells in Use Case Descriptions". In Proceedings of the 27th IEEE International Requirements Engineering Conference (RE'19), pp. 98-108. Jeju Island, South Korea, sep, 2019.
    ID
    DOI: 10.1109/RE.2019.00021
    Abstract
    Use case modeling is very popular for representing the functionality of the system to be developed, and it consists of two parts: the use case diagram and use case descriptions. Use case descriptions are written in structured natural language (NL), and the usage of NL can lead to poor descriptions that are ambiguous, inconsistent, and/or incomplete. Poor descriptions lead to missing requirements and the elicitation of incorrect requirements, as well as less comprehensible use case models. This paper proposes a technique to automate the detection of bad smells of use case descriptions, i.e., symptoms of poor descriptions. At first, to clarify bad smells, we analyzed existing use case models to discover poor use case descriptions concretely and developed a list of bad smells, i.e., a catalogue of bad smells. Some of the bad smells can be refined into measures using the Goal-Question-Metric paradigm to automate their detection. The main contribution of this paper is the automated detection of bad smells. We have implemented an automated smell detector for 22 bad smells and assessed its usefulness by an experiment. As a result, the first version of our tool achieved a precision ratio of 0.591 and a recall ratio of 0.981.
    BibTeX
    @inproceedings{yotaro-re2019,
        author = {Yotaro Seki and Shinpei Hayashi and Motoshi Saeki},
        title = {Detecting Bad Smells in Use Case Descriptions},
        booktitle = {Proceedings of the 27th IEEE International Requirements Engineering Conference},
        pages = {98--108},
        year = 2019,
        month = {sep},
    }
    [yotaro-re2019]: as a page
  19. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Toward Proactive Refactoring: An Exploratory Study on Decaying Modules". In Proceedings of the 3rd International Workshop on Refactoring (IWoR 2019), pp. 39-46. Montreal, Canada, may, 2019.
    ID
    DOI: 10.1109/IWoR.2019.00015
    Abstract
    Source code quality is often measured using code smells, which are indicators of design flaws or problems in the source code. Code smells can be detected using tools such as static analyzers that detect code smells based on source code metrics. Further, developers perform refactoring activities based on the results of such detection tools to improve source code quality. However, such an approach can be considered reactive refactoring, i.e., developers react to code smells after they occur. This means that developers first suffer the effects of low-quality source code (e.g., low readability and understandability) before they start solving code smells. In this study, we focus on proactive refactoring, i.e., refactoring source code before it becomes smelly. This approach allows developers to maintain source code quality without having to suffer the impact of code smells. To support the proactive refactoring process, we propose a technique to detect decaying modules, which are non-smelly modules that are about to become smelly. We present empirical studies on open source projects with the aim of studying the characteristics of decaying modules. Additionally, to facilitate developers in the refactoring planning process, we perform a study on using a machine learning technique to predict decaying modules and report the factor that contributes most to the performance of the model under consideration.
    BibTeX
    @inproceedings{natthawute-iwor2019,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Toward Proactive Refactoring: An Exploratory Study on Decaying Modules},
        booktitle = {Proceedings of the 3rd International Workshop on Refactoring},
        pages = {39--46},
        year = 2019,
        month = {may},
    }
    [natthawute-iwor2019]: as a page
  20. Ryosuke Funaki, Shinpei Hayashi, Motoshi Saeki: "The Impact of Systematic Edits in History Slicing". In Proceedings of the 16th International Conference on Mining Software Repositories (MSR 2019), pp. 555-559. Montreal, Canada, may, 2019.
    ID
    DOI: 10.1109/MSR.2019.00083
    Abstract
    While extracting a subset of a commit history, specifying the necessary portion is a time-consuming task for developers. Several commit-based history slicing techniques have been proposed to identify dependencies between commits and to extract a related set of commits using a specific commit as a slicing criterion. However, the resulting subset of commits becomes large if there are commits for systematic edits whose changes do not depend on each other. We empirically investigated the impact of systematic edits on history slicing. In this study, commits in which systematic edits were detected are split per file so that unnecessary dependencies between commits are eliminated. In several histories of open source systems, the size of history slices was reduced by 13.3–57.2% on average after splitting the commits for systematic edits.
    Slide
    BibTeX
    @inproceedings{rfunaki-msr2019,
        author = {Ryosuke Funaki and Shinpei Hayashi and Motoshi Saeki},
        title = {The Impact of Systematic Edits in History Slicing},
        booktitle = {Proceedings of the 16th International Conference on Mining Software Repositories},
        pages = {555--559},
        year = 2019,
        month = {may},
    }
    [rfunaki-msr2019]: as a page
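    Sketch
    An illustrative sketch of the splitting step described above: a commit recognized as a systematic edit is divided into per-file commits so that a (here deliberately naive, file-overlap-based) history slice no longer pulls in unrelated files. All commit data are invented:
    commits = [
        {"id": "c1", "files": {"A.java"}, "systematic": False},
        # one rename applied mechanically across three files
        {"id": "c2", "files": {"A.java", "B.java", "C.java"}, "systematic": True},
        {"id": "c3", "files": {"B.java"}, "systematic": False},
    ]

    def split_systematic(history):
        """Split each systematic-edit commit into one commit per file."""
        out = []
        for c in history:
            if c["systematic"] and len(c["files"]) > 1:
                out += [{"id": f"{c['id']}@{f}", "files": {f}}
                        for f in sorted(c["files"])]
            else:
                out.append({"id": c["id"], "files": set(c["files"])})
        return out

    def history_slice(history, criterion_file):
        """Naive slice: keep commits touching the criterion file."""
        return [c["id"] for c in history if criterion_file in c["files"]]

    print(history_slice(split_systematic(commits), "B.java"))
    # ['c2@B.java', 'c3']: the A.java and C.java parts of c2 are no longer dragged in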
  21. Sarocha Sothornprapakorn, Shinpei Hayashi, Motoshi Saeki: "Visualizing a Tangled Change for Supporting Its Decomposition and Commit Construction". In Proceedings of the 42nd IEEE Computer Software and Applications Conference (COMPSAC 2018), pp. 74-79. Tokyo, Japan, jul, 2018.
    ID
    DOI: 10.1109/COMPSAC.2018.00018
    Abstract
    Developers often save multiple kinds of source code edits into a single commit in a version control system, producing a tangled change, which is difficult to understand and revert. However, separating such a change using an existing sequence-based change representation is tough. We propose a new visualization technique that shows the details of a tangled change and aligns its component edits in a tree structure to express multiple groups of changes. Our technique is combined with refactoring detection and change relevance calculation techniques for constructing the structural tree. This combination allows us to divide the change into several associations. We have implemented a tool and conducted a controlled experiment with industrial developers to confirm its usefulness and efficiency. The results show that by using our tool with tree visualization, the subjects could understand and decompose tangled changes more easily, faster, and with higher accuracy than with the baseline file list visualization.
    BibTeX
    @inproceedings{sarocha-compsac2018,
        author = {Sarocha Sothornprapakorn and Shinpei Hayashi and Motoshi Saeki},
        title = {Visualizing a Tangled Change for Supporting Its Decomposition and Commit Construction},
        booktitle = {Proceedings of the 42nd IEEE Computer Software and Applications Conference},
        pages = {74--79},
        year = 2018,
        month = {jul},
    }
    [sarocha-compsac2018]: as a page
  22. Aoi Takahashi, Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "A Preliminary Study on Using Code Smells to Improve Bug Localization". In Proceedings of the 26th IEEE/ACM International Conference on Program Comprehension (ICPC 2018), pp. 324-327. Gothenburg, Sweden, may, 2018.
    ID
    DOI: 10.1145/3196321.3196361
    Abstract
    Bug localization is a technique that has been proposed to support the process of identifying the locations of bugs specified in a bug report. A traditional approach such as information retrieval (IR)-based bug localization calculates the similarity between the bug description and the source code and suggests locations that are likely to contain the bug. However, while many approaches have been proposed to improve the accuracy, the likelihood of each module having a bug is often overlooked or they are treated equally, whereas this may not be the case. For example, modules having code smells have been found to be more prone to changes and faults. Therefore, in this paper, we explore a first step toward leveraging code smells to improve bug localization. By combining the code smell severity with the textual similarity from IR-based bug localization, we can identify the modules that are not only similar to the bug description but also have a higher likelihood of containing bugs. Our preliminary evaluation on four open source projects shows that our technique can improve the baseline approach by 142.25% and 30.50% on average for method and class levels, respectively.
    BibTeX
    @inproceedings{takahashi-a-at-icpc2018,
        author = {Aoi Takahashi and Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {A Preliminary Study on Using Code Smells to Improve Bug Localization},
        booktitle = {Proceedings of the 26th IEEE/ACM International Conference on Program Comprehension},
        pages = {324--327},
        year = 2018,
        month = {may},
    }
    [takahashi-a-at-icpc2018]: as a page
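    Sketch
    A one-step sketch of the combination described above: re-ranking IR-based bug localization scores by each module's smell severity. The scores, module names, and the linear mixing formula are illustrative assumptions:
    ir_score = {"Parser.java": 0.62, "Lexer.java": 0.58, "Util.java": 0.20}
    smell_severity = {"Parser.java": 0.1, "Lexer.java": 0.9, "Util.java": 0.3}

    def combined(module, alpha=0.5):
        """Linear mix of textual similarity and smell severity."""
        return (1 - alpha) * ir_score[module] + alpha * smell_severity[module]

    print(sorted(ir_score, key=combined, reverse=True))
    # ['Lexer.java', 'Parser.java', 'Util.java']: the smelly Lexer.java
    # overtakes Parser.java once severity is mixed into the ranking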
  23. Katsuhisa Maruyama, Shinpei Hayashi, Takayuki Omori: "ChangeMacroRecorder: Recording Fine-Grained Textual Changes of Source Code". In Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering (SANER 2018), Tool Demonstration Session, pp. 537-541. Campobasso, Italy, mar, 2018.
    ID
    DOI: 10.1109/SANER.2018.8330255
    URL
    https://www.fse.cs.ritsumei.ac.jp/~maru/papers/saner2018-maruyama.pdf
    Abstract
    Recording code changes has come to be well recognized as an effective means for understanding the evolution of existing programs and making their future changes efficient. Although fine-grained textual changes of source code are worth leveraging in various situations, there has been no satisfactory tool that records such changes. This paper proposes yet another tool, called ChangeMacroRecorder, which automatically records all textual changes of source code while a programmer writes and modifies it in Eclipse's Java editor. Its capability has been improved with respect to both the accuracy of its recording and the convenience of its use. Tool developers can easily and cheaply create new applications that utilize recorded changes by embedding our proposed recording tool into them.
    Slide
    BibTeX
    @inproceedings{maruyama-saner2018,
        author = {Katsuhisa Maruyama and Shinpei Hayashi and Takayuki Omori},
        title = {ChangeMacroRecorder: Recording Fine-Grained Textual Changes of Source Code},
        booktitle = {Proceedings of the 25th IEEE International Conference on Software Analysis, Evolution and Reengineering},
        pages = {537--541},
        year = 2018,
        month = {mar},
    }
    [maruyama-saner2018]: as a page
  24. Lan Wang, Shinpei Hayashi, Motoshi Saeki: "An Improvement on Data Interoperability with Large-Scale Conceptual Model and Its Application in Industry". In Conceptual Modeling: Research in Progress: Companion Proceedings of the 36th International Conference on Conceptual Modelling (ER 2017), vol. 1979, pp. 249-262. Valencia, Spain, nov, 2017.
    URL
    http://ceur-ws.org/Vol-1979/paper-27.pdf
    Abstract
    In the world of the Internet of Things, heterogeneous systems and devices need to be connected. A key issue for systems and devices is data interoperability such as automatic data exchange and interpretation. A well-known approach to solve the interoperability problem is building a conceptual model (CM). Regarding CM in industrial domains, there are often a large number of entities defined in one CM. How data interoperability with such a large-scale CM can be supported is a critical issue when applying CM into industrial domains. In this paper, evolved from our previous work, a meta-model equipped with new concepts of “PropertyRelationship” and “Category” is proposed, and a tool called FSCM supporting the automatic generation of property relationships and categories is developed. A case study in an industrial domain shows that the proposed approach effectively improves the data interoperability of large-scale CMs.
    BibTeX
    @inproceedings{wlan-er2017,
        author = {Lan Wang and Shinpei Hayashi and Motoshi Saeki},
        title = {An Improvement on Data Interoperability with Large-Scale Conceptual Model and Its Application in Industry},
        booktitle = {Conceptual Modeling: Research in Progress: Companion Proceedings of the 36th International Conference on Conceptual Modelling},
        pages = {249--262},
        year = 2017,
        month = {nov},
    }
    [wlan-er2017]: as a page
  25. Keisuke Asano, Shinpei Hayashi, Motoshi Saeki: "Detecting Bad Smells of Refinement in Goal-Oriented Requirements Analysis". In Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBa 2017), co-located with ER 2017, LNCS, vol. 10651, pp. 122-132. Valencia, Spain, nov, 2017.
    ID
    DOI: 10.1007/978-3-319-70625-2_12
    Abstract
    Goal refinement is a crucial step in goal-oriented requirements analysis to create a goal model of high quality. Poor goal refinement leads to missing requirements and eliciting incorrect requirements, and makes the produced goal models less comprehensive. This paper proposes a technique to automate the detection of bad smells of goal refinement, i.e., symptoms of poor goal refinement. Based on a classification of poor refinements, we defined four types of bad smells of goal refinement and developed two types of measures to detect them: measures on the graph structure of a goal model and on the semantic similarity of goal descriptions. We have implemented a support tool to detect bad smells and assessed its usefulness through an experiment.
    Slide
    BibTeX
    @inproceedings{k_asano-mreba2017,
        author = {Keisuke Asano and Shinpei Hayashi and Motoshi Saeki},
        title = {Detecting Bad Smells of Refinement in Goal-Oriented Requirements Analysis},
        booktitle = {Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis},
        pages = {122--132},
        year = 2017,
        month = {nov},
    }
    [k_asano-mreba2017]: as a page
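    Note: of the two measure types mentioned in the entry above, the semantic-similarity one can be illustrated with a toy sketch: if the children's descriptions, taken together, diverge from the parent goal, the refinement is flagged. The bag-of-words cosine and the threshold below are illustrative stand-ins, not the paper's actual measures.

    import math
    from collections import Counter

    def cosine(a: str, b: str) -> float:
        """Bag-of-words cosine similarity between two goal descriptions."""
        va, vb = Counter(a.lower().split()), Counter(b.lower().split())
        dot = sum(va[w] * vb[w] for w in va)
        norm = math.sqrt(sum(v * v for v in va.values())) * \
               math.sqrt(sum(v * v for v in vb.values()))
        return dot / norm if norm else 0.0

    def smell_of_refinement(parent: str, children: list[str], threshold: float = 0.3):
        """Flag a refinement as a potential bad smell when the children's
        descriptions, taken together, diverge from the parent goal."""
        sim = cosine(parent, " ".join(children))
        return ("possible bad smell" if sim < threshold else "ok", round(sim, 2))

    print(smell_of_refinement(
        "improve checkout speed for customers",
        ["cache product pages", "hire more staff"]))   # -> ('possible bad smell', 0.0)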
  26. Tomoo Kinoshita, Shinpei Hayashi, Motoshi Saeki: "Goal-Oriented Requirements Analysis Meets a Creativity Technique". In Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBa 2017), co-located with ER 2017, LNCS, vol. 10651, pp. 101-110. Valencia, Spain, nov, 2017.
    ID
    DOI: 10.1007/978-3-319-70625-2_10
    Abstract
    Goal-oriented requirements analysis (GORA) has been growing in the area of requirements engineering. It is one of the approaches that elicit and analyze stakeholders' requirements as goals to be achieved, and it develops an AND-OR graph, called a goal graph, as a result of requirements elicitation. However, although it is important to involve stakeholders' ideas and viewpoints during requirements elicitation, GORA still has the problem that its processes lack deeper participation of stakeholders. Regarding stakeholders' participation, creativity techniques have also become popular in requirements engineering. They aim to create novel and appropriate requirements by involving stakeholders. One of these techniques, the KJ-method, organizes and associates novel ideas generated by brainstorming. In this paper, we present an approach to support stakeholders' participation during GORA processes by transforming an affinity diagram of the KJ-method into a goal graph, including transformation guidelines, and we also apply our approach to an example.
    BibTeX
    @inproceedings{kinoshita-mreba2017,
        author = {Tomoo Kinoshita and Shinpei Hayashi and Motoshi Saeki},
        title = {Goal-Oriented Requirements Analysis Meets a Creativity Technique},
        booktitle = {Proceedings of the 4th International Workshop on Conceptual Modeling in Requirements and Business Analysis},
        pages = {101--110},
        year = 2017,
        month = {nov},
    }
    [kinoshita-mreba2017]: as a page
  27. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "How Do Developers Select and Prioritize Code Smells? A Preliminary Study". In Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution (ICSME 2017), pp. 484-488. Shanghai, China, sep, 2017.
    ID
    DOI: 10.1109/ICSME.2017.66
    Abstract
    Code smells are considered to be indicators of design flaws or problems in source code. Various tools and techniques have been proposed for detecting code smells. The number of code smells detected by these tools is generally large, so approaches have also been developed for prioritizing and filtering code smells. However, the lack of empirical data regarding how developers select and prioritize code smells hinders improvements to these approaches. In this study, we surveyed professional developers to determine the factors they use for selecting and prioritizing code smells. We found that Task relevance and Smell severity were most commonly considered during code smell selection, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.
    BibTeX
    @inproceedings{natthawute-icsme2017,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {How Do Developers Select and Prioritize Code Smells? A Preliminary Study},
        booktitle = {Proceedings of the 33rd IEEE International Conference on Software Maintenance and Evolution},
        pages = {484--488},
        year = 2017,
        month = {sep},
    }
    [natthawute-icsme2017]: as a page
  28. Maaki Nakano, Kunihiro Noda, Shinpei Hayashi, Takashi Kobayashi: "Mediating Turf Battles! Prioritizing Shared Modules in Locating Multiple Features". In Proceedings of the 41st IEEE Computer Society Signature Conference on Computers, Software and Applications (COMPSAC 2017), pp. 363-368. Torino, Italy, jul, 2017.
    ID
    DOI: 10.1109/COMPSAC.2017.167
    Abstract
    Dynamic feature location techniques (DFLTs), which use execution profiles of scenarios that trigger a feature, are a promising approach to locating features in the source code. A sufficient set of scenarios is key to obtaining highly accurate results; however, its preparation is laborious and difficult in practice. In most cases, a scenario exercises not only the desired feature but also other features. We focus on the relationship between a module and multiple features that can be calculated with no extra scenarios, to improve the accuracy of locating the desired feature in the source code. In this paper, we propose a DFLT using the odds ratios of the multiple relationships between modules and features. We use the similarity coefficient, which is used in fault localization techniques, as a relationship. Our DFLT better orders shared modules compared with an existing DFLT. To reduce developer costs in our DFLT, we also propose a filtering technique that uses formal concept analysis. We evaluate our DFLT on the features of an open source software project with respect to developer costs and show that our DFLT outperforms the existing approach; the average cost of our DFLT is almost half that of the state-of-the-art DFLT.
    BibTeX
    @inproceedings{maaki-compsac2017,
        author = {Maaki Nakano and Kunihiro Noda and Shinpei Hayashi and Takashi Kobayashi},
        title = {Mediating Turf Battles! Prioritizing Shared Modules in Locating Multiple Features},
        booktitle = {Proceedings of the 41st IEEE Computer Society Signature Conference on Computers, Software and Applications},
        pages = {363--368},
        year = 2017,
        month = {jul},
    }
    [maaki-compsac2017]: as a page
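    Note: a toy sketch of the ranking idea in the entry above. It computes an Ochiai-style similarity coefficient between each module and each feature from execution profiles, then demotes modules that also score high for other features. The demotion formula is a crude illustrative stand-in for the paper's odds ratios, and the profiles are invented.

    import math

    # scenario profiles: feature label -> list of executed-module sets (one per run)
    profiles = {
        "search": [{"Query", "Parser", "DB"}, {"Query", "DB"}],
        "export": [{"Export", "DB"}],
    }

    def ochiai(module: str, feature: str) -> float:
        """Ochiai coefficient between a module and a feature, computed from
        which scenario runs executed the module (as in fault localization)."""
        executed_f = sum(module in run for run in profiles[feature])
        executed_all = sum(module in run for runs in profiles.values() for run in runs)
        n_f = len(profiles[feature])
        return executed_f / math.sqrt(n_f * executed_all) if executed_all else 0.0

    def rank(feature: str):
        modules = {m for runs in profiles.values() for run in runs for m in run}
        # A shared module like "DB" scores high for several features; relating
        # its score for `feature` to its scores elsewhere demotes it.
        def score(m):
            own = ochiai(m, feature)
            others = max((ochiai(m, f) for f in profiles if f != feature), default=0.0)
            return own * (1.0 - others)   # illustrative stand-in for an odds ratio
        return sorted(modules, key=score, reverse=True)

    print(rank("search"))   # modules specific to "search" come first; "DB" is demoted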
  29. Shinpei Hayashi, Fumiki Minami, Motoshi Saeki: "Inference-Based Detection of Architectural Violations in MVC2". In Proceedings of the 12th International Conference on Software Technologies (ICSOFT 2017), pp. 394-401. Madrid, Spain, jul, 2017.
    ID
    DOI: 10.5220/0006465103940401
    Abstract
    Utilizing software architecture patterns is important for reducing maintenance costs. However, maintaining code according to the constraints defined by the architecture patterns is time-consuming work. As described herein, we propose a technique to detect code fragments that are non-compliant with the architecture as fine-grained architectural violations. The inputs of this technique are the dependence graph among code fragments extracted from the source code and inference rules according to the architecture. A set of candidate components to which a code fragment can be affiliated is attached to each node of the graph and is updated step by step. The inference rules express the components' responsibilities and dependency constraints. They remove from each node the candidate components that do not satisfy the constraints, given the current estimated state of the surrounding code fragments. If the resulting candidate set of a node does not include the component actually used, the node is detected as a violation. By defining inference rules for the MVC2 architecture and applying the technique to web applications using Play Framework, we obtained accurate detection results.
    Slide
    BibTeX
    @inproceedings{hayashi-icsoft2017,
        author = {Shinpei Hayashi and Fumiki Minami and Motoshi Saeki},
        title = {Inference-Based Detection of Architectural Violations in MVC2},
        booktitle = {Proceedings of the 12th International Conference on Software Technologies},
        pages = {394--401},
        year = 2017,
        month = {jul},
    }
    [hayashi-icsoft2017]: as a page
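    Note: the step-by-step update of candidate component sets described in the entry above is essentially a fixpoint propagation over the dependence graph. A toy sketch follows; the component names, rule table, and example graph are illustrative, not the paper's actual rule set for MVC2.

    # allowed[x] = components that a fragment of kind x may depend on (illustrative)
    COMPONENTS = {"Model", "View", "Controller"}
    ALLOWED = {"Controller": {"Model", "View"}, "Model": {"Model"}, "View": {"Model"}}

    deps = {"a": ["b"], "b": ["c"], "c": []}            # dependence graph: a -> b -> c
    declared = {"a": "Controller", "b": "View", "c": "View"}
    candidates = {n: set(COMPONENTS) for n in deps}

    changed = True
    while changed:                                      # fixpoint iteration
        changed = False
        for src, targets in deps.items():
            for dst in targets:
                # src may be X only if some candidate of dst is allowed for X
                keep_src = {x for x in candidates[src] if ALLOWED[x] & candidates[dst]}
                # dst may be Y only if some candidate of src allows depending on Y
                keep_dst = {y for y in candidates[dst]
                            if any(y in ALLOWED[x] for x in candidates[src])}
                if keep_src != candidates[src] or keep_dst != candidates[dst]:
                    candidates[src], candidates[dst] = keep_src, keep_dst
                    changed = True

    for node, comp in declared.items():
        if comp not in candidates[node]:                # inferred set excludes it
            print(f"violation: {node} declared {comp}, inferred {candidates[node]}")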
  30. Yu Negishi, Shinpei Hayashi, Motoshi Saeki: "Establishing Regulatory Compliance in Goal-Oriented Requirements Analysis". In Proceedings of the 19th IEEE Conference on Business Informatics (CBI 2017), pp. 434-443. Thessaloniki, Greece, jul, 2017.
    ID
    DOI: 10.1109/CBI.2017.49
    Abstract
    To develop, at lower cost, information systems that do not violate regulations, it is necessary to elicit requirements compliant with the regulations. Automated support allows us to avoid missing requirements necessary to comply with regulations and to exclude functional requirements that violate the regulations. In this paper, we propose a technique to detect goals relevant to regulations in a goal model and to add goals so that the resulting goal model is compliant with the regulations. In this approach, we obtain the goals relevant to regulations by semantically matching goal descriptions to regulatory statements. We use a case grammar approach to deal with the meaning of goal descriptions and regulatory statements, i.e., both are transformed into case frames as their semantic representations, and we check whether their case frames can be unified. After detecting the relevant goals, based on the modality of the matched regulatory statements, new goals realizing compliance with the regulatory statements are added to the goal model. We conducted case studies and found that 93% of regulatory violations could be corrected.
    Slide
    BibTeX
    @inproceedings{negishi-cbi2017,
        author = {Yu Negishi and Shinpei Hayashi and Motoshi Saeki},
        title = {Establishing Regulatory Compliance in Goal-Oriented Requirements Analysis},
        booktitle = {Proceedings of the 19th IEEE Conference on Business Informatics},
        pages = {434--443},
        year = 2017,
        month = {jul},
    }
    [negishi-cbi2017]: as a page
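    Note: a toy sketch of the semantic matching described in the entry above. Goal descriptions and regulatory statements become case frames (a verb plus labeled deep cases), and a goal is deemed relevant when its frame unifies with the regulation's. The frames, slot names, and synonym lexicon are invented for illustration.

    SYNONYMS = {"store": "record", "save": "record", "client": "customer"}

    def normalize(word: str) -> str:
        return SYNONYMS.get(word, word)

    def to_frame(verb: str, **slots: str) -> dict:
        """A case frame: a verb plus labeled deep cases (agent, object, ...)."""
        return {"verb": normalize(verb), **{k: normalize(v) for k, v in slots.items()}}

    def unifies(goal: dict, regulation: dict) -> bool:
        """The goal matches the regulatory statement if every slot the
        regulation constrains is present in the goal with the same filler."""
        return all(goal.get(slot) == value for slot, value in regulation.items())

    goal = to_frame("save", agent="system", object="client data")
    rule = to_frame("record", object="client data")   # e.g., "... data must be recorded"
    print(unifies(goal, rule))   # True: 'save' and 'record' unify via the lexicon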
  31. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Revisiting Context-Based Code Smells Prioritization: On Supporting Referred Context". In Proceedings of the XP 2017 Scientific Workshops (MTD 2017), co-located with XP 2017, no. 3, pp. 1-5. Cologne, Germany, may, 2017.
    ID
    DOI: 10.1145/3120459.3120463
    Abstract
    Because numerous code smells are revealed by code smell detectors, many attempts have been undertaken to mitigate related problems by prioritizing and filtering code smells. We earlier proposed a technique to prioritize code smells by leveraging the context of the developers, i.e., the modules that the developers plan to implement. Our empirical studies revealed that the results of code smells prioritized using our technique are useful to support developers' implementation on the modules they intend to change. Nonetheless, in software change processes, developers often navigate through many modules and refer to them before making actual changes. Such modules are important when considering the developers' context. Therefore, it is essential to ascertain whether our technique can also support developers on modules to which they are going to refer to make changes. We conducted an empirical study of an open source project adopting tools for recording developers' interaction history. Our results demonstrate that the code smells prioritized using our approach can also be used to support developers for modules to which developers are going to refer, irrespective of the need for modification.
    BibTeX
    @inproceedings{natthawute-mtd2017,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Revisiting Context-Based Code Smells Prioritization: On Supporting Referred Context},
        booktitle = {Proceedings of the XP 2017 Scientific Workshops},
        pages = {1--5},
        year = 2017,
        month = {may},
    }
    [natthawute-mtd2017]: as a page
  32. Katsuhisa Maruyama, Shinpei Hayashi: "A Tool Supporting Postponable Refactoring". In Proceedings of the 39th International Conference on Software Engineering (ICSE 2017), Poster Session, pp. 133-135. Buenos Aires, Argentina, may, 2017.
    ID
    DOI: 10.1109/ICSE-C.2017.108
    Abstract
    Failures of precondition checking when attempting to apply automated refactorings often discourage programmers from attempting to use these refactorings in the future. To alleviate this situation, postponing a failed refactoring instead of canceling it is beneficial. This poster paper proposes the new concept of postponable refactoring and a prototype tool that implements postponable Extract Method as an Eclipse plug-in. We believe that this refactoring tool opens a new field of reconciling automated and manual refactoring.
    BibTeX
    @inproceedings{maruyama-icse2017,
        author = {Katsuhisa Maruyama and Shinpei Hayashi},
        title = {A Tool Supporting Postponable Refactoring},
        booktitle = {Proceedings of the 39th International Conference on Software Engineering},
        pages = {133--135},
        year = 2017,
        month = {may},
    }
    [maruyama-icse2017]: as a page
  33. Shoichiro Ito, Shinpei Hayashi, Motoshi Saeki: "How Can You Improve Your As-is Models? Requirements Analysis Methods Meet GQM". In Proceedings of the 23rd Working Conference on Requirements Engineering: Foundation for Software Quality (REFSQ 2017), pp. 95-111. Essen, Germany, feb, 2017.
    ID
    DOI: 10.1007/978-3-319-54045-0_8
    Abstract
    [Context & motivation] To develop information systems providing high business value, we should clarify As-is business processes and the information systems supporting them, identify the problems hidden in them, and develop To-be information systems so that the identified problems can be solved. [Question/problem] In this development, we need a technique to support the identification of the problems, one that can be seamlessly connected to the modeling techniques. [Principal ideas/results] In this paper, to define metrics for extracting problems of the As-is system, following the domains specific to it, we propose combining Goal-Question-Metric (GQM) with existing requirements analysis techniques. Furthermore, we integrate goal-oriented requirements analysis (GORA) with the problem frames approach and use case modeling to define metrics measuring the problematic efforts of human actors in the As-is models. This paper includes a case study of a reporting operation process at a brokerage office to check the feasibility of our approach. [Contribution] Our contribution is the proposal of using GQM to identify the problems of an As-is model specified with the combination of GORA, use case modeling, and problem frames.
    Slide
    BibTeX
    @inproceedings{ito-refsq2017,
        author = {Shoichiro Ito and Shinpei Hayashi and Motoshi Saeki},
        title = {How Can You Improve Your As-is Models? Requirements Analysis Methods Meet GQM},
        booktitle = {Proceedings of the 23rd Working Conference on Requirements Engineering: Foundation for Software Quality},
        pages = {95--111},
        year = 2017,
        month = {feb},
    }
    [ito-refsq2017]: as a page
  34. Katsuhisa Maruyama, Shinpei Hayashi, Norihiro Yoshida, Eunjong Choi: "Frame-Based Behavior Preservation in Refactoring". In Proceedings of the 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER 2017), Poster Session, pp. 573-574. Klagenfurt, Austria, feb, 2017.
    ID
    DOI: 10.1109/SANER.2017.7884683
    Abstract
    Behavior preservation often bothers programmers in refactoring. This poster paper proposes a new approach that tames behavior preservation by introducing the concept of a frame. A frame in refactoring defines stakeholders' individual concerns about the refactored code. Frame-based refactoring preserves the observable behavior within a particular frame. Therefore, it helps programmers distinguish the behavioral changes that they should observe from those that they can ignore.
    BibTeX
    @inproceedings{maruyama-saner2017,
        author = {Katsuhisa Maruyama and Shinpei Hayashi and Norihiro Yoshida and Eunjong Choi},
        title = {Frame-Based Behavior Preservation in Refactoring},
        booktitle = {Proceedings of the 24th IEEE International Conference on Software Analysis, Evolution, and Reengineering},
        pages = {573--574},
        year = 2017,
        month = {feb},
    }
    [maruyama-saner2017]: as a page
  35. Shinpei Hayashi, Hiroshi Kazato, Takashi Kobayashi, Tsuyoshi Oshima, Katsuyuki Natsukawa, Takashi Hoshino, Motoshi Saeki: "Guiding Identification of Missing Scenarios for Dynamic Feature Location". In Proceedings of the 23rd Asia-Pacific Software Engineering Conference (APSEC 2016), pp. 393-396. Hamilton, New Zealand, dec, 2016.
    ID
    DOI: 10.1109/APSEC.2016.068
    Abstract
    Feature location (FL) is an important activity for finding correspondence between software features and modules in source code. Although dynamic FL techniques are effective, the quality of their results depends on analysts preparing sufficient scenarios for exercising the features. In this paper, we propose a technique for guiding the identification of missing scenarios using a prior FL result. After applying FL, unexplored call dependencies are extracted by comparing the results of static and dynamic analyses, and analysts are advised to investigate them to find missing scenarios. We propose several metrics that measure the potential impact of unexplored dependencies to help analysts sort them out. Through a preliminary evaluation using an example web application, we showed that our technique is effective for recommending clues to find missing scenarios.
    Slide
    BibTeX
    @inproceedings{hayashi-apsec2016,
        author = {Shinpei Hayashi and Hiroshi Kazato and Takashi Kobayashi and Tsuyoshi Oshima and Katsuyuki Natsukawa and Takashi Hoshino and Motoshi Saeki},
        title = {Guiding Identification of Missing Scenarios for Dynamic Feature Location},
        booktitle = {Proceedings of the 23rd Asia-Pacific Software Engineering Conference},
        pages = {393--396},
        year = 2016,
        month = {dec},
    }
    [hayashi-apsec2016]: as a page
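    Note: the core comparison described in the entry above can be sketched as a set difference between static and dynamic call edges; the ranking metric below (fan-out of the uncovered callee) is an illustrative stand-in for the paper's impact metrics, and the edges are invented.

    static_calls = {("Cart", "Discount"), ("Cart", "Stock"),
                    ("Order", "Mail"), ("Discount", "Rate")}
    dynamic_calls = {("Cart", "Stock")}         # observed while running the scenarios

    unexplored = static_calls - dynamic_calls   # never exercised by any scenario

    # rank unexplored edges by how much static behavior hides behind the callee
    fanout: dict[str, int] = {}
    for caller, _callee in static_calls:
        fanout[caller] = fanout.get(caller, 0) + 1

    ranked = sorted(unexplored, key=lambda e: fanout.get(e[1], 0), reverse=True)
    print(ranked)   # investigate these edges first to find missing scenarios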
  36. Tomoo Kinoshita, Shinpei Hayashi: "How Do We Use Goal-Oriented Requirements Analysis in Interviews with Stakeholders?: An Approach to Transforming Affinity Diagrams into Goal Graphs". The 35th International Conference on Conceptual Modeling (ER 2016), Poster Session. Gifu, Japan, nov, 2016.
    BibTeX
    @misc{kinoshita-er2016,
        author = {Tomoo Kinoshita and Shinpei Hayashi},
        title = {How Do We Use Goal-Oriented Requirements Analysis in Interviews with Stakeholders?: An Approach to Transforming Affinity Diagrams into Goal Graphs},
        howpublished = {The 35th International Conference on Conceptual Modeling},
        year = 2016,
        month = {nov},
    }
    [kinoshita-er2016]: as a page
  37. Keisuke Asano, Shinpei Hayashi: "Toward Detecting Inappropriate Goal Refinements in a Goal Model". The 35th International Conference on Conceptual Modeling (ER 2016), Poster Session. Gifu, Japan, nov, 2016.
    BibTeX
    @misc{k_asano-er2016,
        author = {Keisuke Asano and Shinpei Hayashi},
        title = {Toward Detecting Inappropriate Goal Refinements in a Goal Model},
        howpublished = {The 35th International Conference on Conceptual Modeling},
        year = 2016,
        month = {nov},
    }
    [k_asano-er2016]: as a page
  38. Lan Wang, Shinpei Hayashi: "How to Keep System Consistency via Meta-Model-Based Traceability Rules?". The 35th International Conference on Conceptual Modeling (ER 2016), Poster Session. Gifu, Japan, nov, 2016.
    BibTeX
    @misc{wlan-er2016,
        author = {Lan Wang and Shinpei Hayashi},
        title = {How to Keep System Consistency via Meta-Model-Based Traceability Rules?},
        howpublished = {The 35th International Conference on Conceptual Modeling},
        year = 2016,
        month = {nov},
    }
    [wlan-er2016]: as a page
  39. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki: "Early Requirements Analysis for a Socio-Technical System based on Goal Dependencies". In Proceedings of the 15th International Conference on Intelligent Software Methodologies, Tools and Techniques (SOMET 2016), pp. 125-138. Larnaca, Cyprus, sep, 2016. Received the Best Paper Award.
    ID
    DOI: 10.3233/978-1-61499-674-3-125
    Abstract
    A socio-technical system (STS) consists of many different actors such as people, organizations, software applications, and infrastructures. We call the actors other than people and organizations machines. Machines should be carefully introduced into an STS because a machine that is beneficial to some people or organizations may be harmful to others. We thus propose a goal-oriented requirements modelling language called GDMA, based on i*, so that machines with the following characteristics can be systematically specified. First, machines make the goals of each person achieved more fully and better than ever. Second, machines let people achieve their goals with fewer tasks and less effort than ever. We also propose analysis techniques for GDMA to judge whether the introduction of machines is appropriate. Several machines are introduced into an as-is model of GDMA locally with the help of model transformation techniques. Then, such an introduction is evaluated globally on the basis of metrics derived from the model structure. We confirmed that GDMA could evaluate the success and failure of existing projects.
    BibTeX
    @inproceedings{kaiya-somet2016,
        author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki},
        title = {Early Requirements Analysis for a Socio-Technical System based on Goal Dependencies},
        booktitle = {Proceedings of the 15th International Conference on Intelligent Software Methodologies, Tools and Techniques},
        pages = {125--138},
        year = 2016,
        month = {sep},
    }
    [kaiya-somet2016]: as a page
  40. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Context-Based Code Smells Prioritization for Prefactoring". In Proceedings of the 24th International Conference on Program Comprehension (ICPC 2016), pp. 1-10. Austin, Texas, USA, may, 2016.
    ID
    DOI: 10.1109/ICPC.2016.7503705
    Abstract
    To find opportunities for applying prefactoring, several techniques for detecting bad smells in source code have been proposed. Existing smell detectors are often unsuitable for developers who have a specific context because these detectors do not consider that context and output results that mix smells related to it with unrelated ones. Consequently, the developers must spend a considerable amount of time identifying relevant smells. As described in this paper, we propose a technique to prioritize bad code smells using the developers' context. Explicit data on the context are obtained using a list of issues extracted from an issue tracking system. We applied impact analysis to the list of issues and used the results to specify which smells are associated with the context. Consequently, our approach can provide developers with a list of prioritized bad code smells related to their current context. Several evaluations using open source projects demonstrate the effectiveness of our technique.
    BibTeX
    @inproceedings{natthawute-icpc2016,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Context-Based Code Smells Prioritization for Prefactoring},
        booktitle = {Proceedings of the 24th International Conference on Program Comprehension},
        pages = {1--10},
        year = 2016,
        month = {may},
    }
    [natthawute-icpc2016]: as a page
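    Note: a minimal sketch of the prioritization idea in the entry above: smells residing in modules that impact analysis of the open issues marks as context-relevant are ranked first. The detector output and relevance scores below are invented for illustration.

    # (smell kind, module it lives in), as a detector might report them
    detected_smells = [("God Class", "OrderService"), ("Feature Envy", "PdfUtil"),
                       ("Long Method", "CartController")]

    # modules predicted, by impact analysis of the open issues, to change next
    context = {"OrderService": 0.9, "CartController": 0.6}   # module -> relevance

    prioritized = sorted(detected_smells,
                         key=lambda s: context.get(s[1], 0.0), reverse=True)
    for smell, module in prioritized:
        print(f"{context.get(module, 0.0):.1f}  {smell} in {module}")
    # smells outside the developer's context (PdfUtil) sink to the bottom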
  41. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki, Takao Okubo, Nobukazu Yoshioka, Hironori Washizaki, Atsuo Hazeyama: "Finding Potential Threats in Several Security Targets for Eliciting Security Requirements". In Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology (ICCGI 2015), pp. 83-92. St. Julians, Malta, oct, 2015.
    URL
    https://www.thinkmind.org/download.php?articleid=iccgi_2015_4_10_10050
    Abstract
    Threats to existing systems help requirements analysts to elicit security requirements for a new system similar to such systems, because security requirements specify how to protect the system against threats and similar systems require similar means for protection. We propose a method of finding potential threats that can be used for eliciting security requirements for such a system. The method enables analysts to find additional security requirements when they have already elicited one or a few threats. The potential threats are derived from several security targets (STs) in the Common Criteria. An ST contains knowledge related to security requirements, such as threats and objectives, together with their explicit relationships. In addition, individual objectives are explicitly related to the set of means for protection, which are commonly used in any ST. Because we focus on such means to find potential threats, our method can be applied to STs written in any language, such as English or French. We applied our method to three different domains and evaluated it. In our evaluation, we enumerated all threat pairs in each domain. We then used the method to predict whether the two threats in each pair threaten the same requirement. The recall of the prediction was more than 70%, and the precision was 20 to 40% in the three domains.
    BibTeX
    @inproceedings{kaiya-iccgi2015,
        author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki and Takao Okubo and Nobukazu Yoshioka and Hironori Washizaki and Atsuo Hazeyama},
        title = {Finding Potential Threats in Several Security Targets for Eliciting Security Requirements},
        booktitle = {Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology},
        pages = {83--92},
        year = 2015,
        month = {oct},
    }
    [kaiya-iccgi2015]: as a page
  42. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling and Utilizing Security Knowledge for Eliciting Security Requirements". In Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBa 2015), co-located with ER 2015, pp. 236-247. Stockholm, Sweden, oct, 2015.
    ID
    DOI: 10.1007/978-3-319-25747-1_24
    Abstract
    In order to develop secure information systems with less development cost, it is important to elicit the requirements for security functions (security requirements for short) as early in the development process as possible. To achieve this, accumulated knowledge of threats and their objectives obtained from practical experience is useful, and techniques to support the elicitation of security requirements utilizing this knowledge should be developed. In this paper, we present a technique for security requirements elicitation using practical knowledge of threats, their objectives, and the security functions realizing the objectives, extracted from Security Target documents compliant with the Common Criteria standard. We show the usefulness of our approach with several case studies.
    Slide
    BibTeX
    @inproceedings{abe-mreba2015,
        author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
        title = {Modeling and Utilizing Security Knowledge for Eliciting Security Requirements},
        booktitle = {Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis},
        pages = {236--247},
        year = 2015,
        month = {oct},
    }
    [abe-mreba2015]: as a page
  43. Ryotaro Nakamura, Yu Negishi, Shinpei Hayashi, Motoshi Saeki: "Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking". In Proceedings of the 8th International Workshop on Requirements Engineering and Law (RELAW 2015), co-located with RE'15, pp. 10-18. Ottawa, Canada, aug, 2015.
    ID
    DOI: 10.1109/RELAW.2015.7330206
    Abstract
    To check the consistency between requirements specification documents and regulations by using a model checking technique, requirements analysts generate inputs to the model checker, i.e., state transition machines from the documents and logical formulas from the regulatory statements to be verified as properties. During these generation processes, to make the logical formulas semantically correspond to the state transition machine, analysts should perform terminology matching, where they look for words in the requirements document having the same meaning as words in the regulatory statements and unify the semantically equivalent words. In this paper, using a case grammar approach, we propose an automated technique to infer the meaning of words in requirements specification documents by means of co-occurrence constraints on words in case frames, and to generate from regulatory statements the logical formulas in which the words are unified with those of the requirements documents. We confirmed the feasibility of our proposal with two case studies.
    Slide
    BibTeX
    @inproceedings{nakamura-relaw2015,
        author = {Ryotaro Nakamura and Yu Negishi and Shinpei Hayashi and Motoshi Saeki},
        title = {Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking},
        booktitle = {Proceedings of the 8th International Workshop on Requirements Engineering and Law},
        pages = {10--18},
        year = 2015,
        month = {aug},
    }
    [nakamura-relaw2015]: as a page
  44. Jumpei Matsuda, Shinpei Hayashi, Motoshi Saeki: "Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results". In Proceedings of the 14th International Workshop on Principles of Software Evolution (IWPSE 2015), co-located with ESEC/FSE 2015, pp. 19-27. Bergamo, Italy, aug, 2015.
    ID
    DOI: 10.1145/2804360.2804363
    Abstract
    In software configuration management using a version control system, developers have to follow the commit policy of the project. However, preparing changes according to the policy is sometimes cumbersome and time-consuming, in particular when applying a large refactoring consisting of multiple primitive refactoring instances. In this paper, we propose a technique for re-organizing changes by recording the editing operations of source code. Editing operations, including refactoring operations, are hierarchically managed based on their types as provided by an integrated development environment. Using the obtained hierarchy, developers can easily configure the granularity of changes and obtain the resulting changes based on the configured granularity. We confirmed the feasibility of the technique by applying it to the recorded changes in a large refactoring process.
    BibTeX
    @inproceedings{jmatsu-iwpse2015,
        author = {Jumpei Matsuda and Shinpei Hayashi and Motoshi Saeki},
        title = {Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results},
        booktitle = {Proceedings of the 14th International Workshop on Principles of Software Evolution},
        pages = {19--27},
        year = 2015,
        month = {aug},
    }
    [jmatsu-iwpse2015]: as a page
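    Note: a toy sketch of the granularity configuration described in the entry above: recorded edit operations are grouped by their IDE-provided types, and choosing a level of the type hierarchy yields coarser or finer change sets to commit separately. The hierarchy and operations are invented for illustration.

    hierarchy = {                       # leaf operation type -> parent category
        "rename-method": "refactoring", "extract-method": "refactoring",
        "typing": "manual-edit", "paste": "manual-edit",
    }
    ops = [("typing", "fix guard"), ("rename-method", "calc -> price"),
           ("extract-method", "validate()"), ("paste", "logging call")]

    def group(level: str) -> dict:
        """Partition recorded operations into change sets at the chosen granularity."""
        out: dict[str, list[str]] = {}
        for kind, detail in ops:
            key = kind if level == "leaf" else hierarchy[kind]
            out.setdefault(key, []).append(detail)
        return out

    print(group("category"))   # two change sets: refactorings vs manual edits
    print(group("leaf"))       # four finer-grained change sets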
  45. Wataru Inoue, Shinpei Hayashi, Haruhiko Kaiya, Motoshi Saeki: "Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering". In Proceedings of the 10th International Conference on Software Engineering and Applications (ICSOFT-EA 2015), pp. 185-195. Colmar, Alsace, France, jul, 2015.
    ID
    DOI: 10.5220/0005499301850195
    Abstract
    In this paper, we propose a multi-dimensional extension of goal graphs in goal-oriented requirements engineering in order to support the understanding of the relations between goals, i.e., goal refinements. Goals specify multiple concerns such as functions, strategies, and non-functional properties, and they are refined into sub-goals from mixed views of these concerns. This intermixture of concerns in goals makes it difficult for a requirements analyst to understand and maintain goal graphs. In our approach, a goal graph is put in a multi-dimensional space, a concern corresponds to a coordinate axis in this space, and goals are refined into sub-goals referring to the coordinates. Thus, the meaning of a goal refinement is explicitly provided by means of the coordinates used for the refinement. By tracing and focusing on the coordinates of goals, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and conducted an exploratory experiment to evaluate the usefulness of our approach.
    BibTeX
    @inproceedings{inouew-icsoft2015,
        author = {Wataru Inoue and Shinpei Hayashi and Haruhiko Kaiya and Motoshi Saeki},
        title = {Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering},
        booktitle = {Proceedings of the 10th International Conference on Software Engineering and Applications},
        pages = {185--195},
        year = 2015,
        month = {jul},
    }
    [inouew-icsoft2015]: as a page
  46. Yoshiki Higo, Akio Ohtani, Shinpei Hayashi, Hideaki Hata, Shinji Kusumoto: "Toward Reusing Code Changes". In Proceedings of the 12th Working Conference on Mining Software Repositories (MSR 2015), pp. 372-376. Florence, Italy, may, 2015.
    ID
    DOI: 10.1109/MSR.2015.43
    Abstract
    Existing techniques have succeeded in helping developers implement new code. However, they are insufficient for helping developers change existing code. Previous studies have proposed techniques to support bug fixes, but they do not support other kinds of code changes such as function enhancements and refactorings. In this paper, we propose a novel system that helps developers change existing code. Unlike existing techniques, our system can support any kind of code change if similar code changes occurred in the past. Our research is still at a very early stage, and we do not have any implementation or prototype yet. This paper introduces our research purpose, an outline of our system, and how our system differs from existing techniques.
    BibTeX
    @inproceedings{higo-msr2015,
        author = {Yoshiki Higo and Akio Ohtani and Shinpei Hayashi and Hideaki Hata and Shinji Kusumoto},
        title = {Toward Reusing Code Changes},
        booktitle = {Proceedings of the 12th Working Conference on Mining Software Repositories},
        pages = {372--376},
        year = 2015,
        month = {may},
    }
    [higo-msr2015]: as a page
  47. Shinpei Hayashi, Daiki Hoshino, Jumpei Matsuda, Motoshi Saeki, Takayuki Omori, Katsuhisa Maruyama: "Historef: A Tool for Edit History Refactoring". In Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER 2015), Tool Demo Track, pp. 469-473. Montréal, Canada, mar, 2015.
    ID
    DOI: 10.1109/SANER.2015.7081858
    Abstract
    This paper presents Historef, a tool for automating edit history refactoring in the Eclipse IDE for Java programs. The aim of our history refactorings is to improve the understandability and/or usability of the history without changing its whole effect. Historef enables us to apply history refactorings to the recorded edit history in the middle of a developer's source code editing process. Using our integrated tool, developers can commit the refactored edits into the underlying SCM repository after applying edit history refactorings, making it easy to manage their changes based on the performed edits.
    Slide
    BibTeX
    @inproceedings{hayashi-saner2015,
        author = {Shinpei Hayashi and Daiki Hoshino and Jumpei Matsuda and Motoshi Saeki and Takayuki Omori and Katsuhisa Maruyama},
        title = {Historef: A Tool for Edit History Refactoring},
        booktitle = {Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering},
        pages = {469--473},
        year = 2015,
        month = {mar},
    }
    [hayashi-saner2015]: as a page
  48. Shinpei Hayashi, Takuto Yanagida, Motoshi Saeki, Hidenori Mimura: "Class Responsibility Assignment as Fuzzy Constraint Satisfaction". In Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice (IWESEP 2014), pp. 19-24. Osaka, Japan, nov, 2014.
    ID
    DOI: 10.1109/IWESEP.2014.13
    Abstract
    We formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) to automate CRA of high quality. Responsibilities are contracts or obligations that objects should assume; by aligning them to classes appropriately, quality designs are realized. Typical conditions of a desirable design are low coupling between highly cohesive classes. However, because of trade-offs among such conditions, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. Additionally, if we have an initial assignment, the one improved by our technique should keep the original assignment as much as possible because it reflects the intention of the human designers. We represent such conditions as fuzzy constraints and formulate CRA as an FCSP. That enables us to apply common FCSP solvers to the problem and to derive solutions representing CRAs. A preliminary evaluation indicates the effectiveness of our technique.
    Slide
    BibTeX
    @inproceedings{hayashi-iwesep2014,
        author = {Shinpei Hayashi and Takuto Yanagida and Motoshi Saeki and Hidenori Mimura},
        title = {Class Responsibility Assignment as Fuzzy Constraint Satisfaction},
        booktitle = {Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice},
        pages = {19--24},
        year = 2014,
        month = {nov},
    }
    [hayashi-iwesep2014]: as a page
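    Note: a toy rendering of the formulation in the entry above. Each fuzzy constraint returns a satisfaction degree in [0, 1] (collaborators placed together; staying close to the designer's initial assignment), and the solver maximizes the minimum degree, a common FCSP criterion. The responsibilities, dependencies, and degrees are invented for illustration.

    from itertools import product

    resps = ["parse", "render", "cache"]
    depends = [("parse", "cache")]             # collaborating responsibilities
    initial = {"parse": 0, "render": 1, "cache": 1}   # the designer's draft
    classes = [0, 1]

    def degree(assign: dict) -> float:
        """Minimum satisfaction degree over all fuzzy constraints."""
        sat = []
        for a, b in depends:                   # low coupling: collaborators together
            sat.append(1.0 if assign[a] == assign[b] else 0.3)
        kept = sum(assign[r] == initial[r] for r in resps) / len(resps)
        sat.append(kept)                       # stay close to the initial assignment
        return min(sat)

    best = max((dict(zip(resps, c)) for c in product(classes, repeat=len(resps))),
               key=degree)                     # brute force stands in for a solver
    print(best, round(degree(best), 2))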
  49. Shinpei Hayashi, Takashi Ishio, Hiroshi Kazato, Tsuyoshi Oshima: "Toward Understanding How Developers Recognize Features in Source Code from Descriptions". In Proceedings of the 9th International Workshop on Advanced Modularization Techniques (AOAsia/Pacific 2014), co-located with FSE 2014, pp. 1-3. Hong Kong, China, nov, 2014.
    ID
    DOI: 10.1145/2666358.2666578
    Abstract
    A basic clue for feature location available to developers is a description of a feature written in a natural language. However, a description of a feature does not clearly specify the boundary of the feature, while developers tend to locate the feature precisely by excluding marginal modules that are likely outside of the boundary. This paper addresses the question: does a clearer description of a feature enable developers to recognize the same sets of modules as relevant to the feature? Based on an experiment with human subjects, we conclude that different descriptions lead to different sets of modules.
    Slide
    BibTeX
    @inproceedings{hayashi-aoasia2014,
        author = {Shinpei Hayashi and Takashi Ishio and Hiroshi Kazato and Tsuyoshi Oshima},
        title = {Toward Understanding How Developers Recognize Features in Source Code from Descriptions},
        booktitle = {Proceedings of the 9th International Workshop on Advanced Modularization Techniques},
        pages = {1--3},
        year = 2014,
        month = {nov},
    }
    [hayashi-aoasia2014]: as a page
  50. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "A Visualization Tool Recording Historical Data of Program Comprehension Tasks". In Proceedings of the 22nd International Conference on Program Comprehension (ICPC 2014), Tool Demo Track, pp. 207-211. Hyderabad, India, jun, 2014.
    ID
    DOI: 10.1145/2597008.2597802
    Abstract
    Software visualization has become a major technique in program comprehension. Although many tools visualize the structure, behavior, and evolution of a program, they have no concern with how a tool user has understood it. Moreover, they discard the material the user has produced through the trial-and-error process of program comprehension. This paper presents a source code visualization tool called CodeForest. It uses a forest metaphor to depict the source code of Java programs. Each tree represents a class within the program, and the collection of trees constitutes a three-dimensional forest. CodeForest helps a user to try a large number of combinations of mappings of software metrics onto visual parameters. Moreover, it provides two new types of support: leaving notes that memorize the current understanding and insights alongside visualized objects, and automatically recording a user's actions during understanding. The left notes and recorded actions might serve as historical data providing hints that accelerate the current comprehension task.
    BibTeX
    @inproceedings{maruyama-icpc2014,
        author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
        title = {A Visualization Tool Recording Historical Data of Program Comprehension Tasks},
        booktitle = {Proceedings of the 22nd International Conference on Program Comprehension},
        pages = {207--211},
        year = 2014,
        month = {jun},
    }
    [maruyama-icpc2014]: as a page
  51. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling Security Threat Patterns to Derive Negative Scenarios". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 58-66. Bangkok, Thailand, dec, 2013.
    ID
    DOI: 10.1109/APSEC.2013.19
    Abstract
    The elicitation of security requirements is a crucial issue in developing secure business processes and information systems of higher quality. Although we have several methods to elicit security requirements, most of them do not provide sufficient support for identifying security threats. Since threats, like exceptional events, do not occur so frequently, it is much more difficult to determine the potential threats exhaustively than to identify the normal behavior of a business process. To reduce this difficulty, accumulated knowledge of threats obtained from practical settings is necessary. In this paper, we present a technique to model knowledge of threats as patterns by deriving the negative scenarios that realize threats, and to utilize the patterns during business process modeling. The knowledge is extracted from Security Target documents, based on the international Common Criteria standard, and the patterns are described as transformation rules on sequence diagrams. In our approach, an analyst composes normal scenarios of a business process with sequence diagrams, and the threat patterns matched to them derive negative scenarios. Our approach has been demonstrated on several examples to show its practical applicability.
    BibTeX
    @inproceedings{abe-apsec2013,
        author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
        title = {Modeling Security Threat Patterns to Derive Negative Scenarios},
        booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
        pages = {58--66},
        year = 2013,
        month = {dec},
    }
    [abe-apsec2013]: as a page
  52. Hiroshi Kazato, Shinpei Hayashi, Tsuyoshi Oshima, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Extracting and Visualizing Implementation Structure of Features". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 476-484. Bangkok, Thailand, dec, 2013.
    ID
    DOI: 10.1109/APSEC.2013.69
    Abstract
    Feature location is an activity to identify correspondence between features in a system and program elements in source code. After a feature is located, developers need to understand the implementation structure around the location from static and/or behavioral points of view. This paper proposes a semi-automatic technique both for locating features and exposing their implementation structures in source code, using a combination of dynamic analysis and two data analysis techniques: sequential pattern mining and formal concept analysis. We have implemented our technique in a supporting tool and applied it to an example of a web application. The result shows that the proposed technique is not only feasible but also helpful for understanding the implementation of features just after they are located.
    BibTeX
    @inproceedings{kazato-apsec2013,
        author = {Hiroshi Kazato and Shinpei Hayashi and Tsuyoshi Oshima and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Extracting and Visualizing Implementation Structure of Features},
        booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
        pages = {476--484},
        year = 2013,
        month = {dec},
    }
    [kazato-apsec2013]: as a page
  53. Shinpei Hayashi, Sirinut Thangthumachit, Motoshi Saeki: "REdiffs: Refactoring-Aware Difference Viewer for Java". In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), Tool Demonstrations Track, pp. 487-488. Koblenz-Landau, Germany, oct, 2013.
    ID
    DOI: 10.1109/WCRE.2013.6671331
    Abstract
    Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if changes are tangled with refactorings in a single revision, then the resulting source code differences are more complicated. We propose an interactive difference viewer which enables us to separate refactoring effects from source code differences for improving the understandability of the differences.
    BibTeX
    @inproceedings{hayashi-wcre2013,
        author = {Shinpei Hayashi and Sirinut Thangthumachit and Motoshi Saeki},
        title = {REdiffs: Refactoring-Aware Difference Viewer for Java},
        booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
        pages = {487--488},
        year = 2013,
        month = {oct},
    }
    [hayashi-wcre2013]: as a page
  54. Takashi Ishio, Shinpei Hayashi, Hiroshi Kazato, Tsuyoshi Oshima: "On the Effectiveness of Accuracy of Automated Feature Location Technique". In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), pp. 381-390. Koblenz-Landau, Germany, oct, 2013.
    ID
    DOI: 10.1109/WCRE.2013.6671313
    Abstract
    Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. A more accurate result is expected to enable developers to perform more accurate feature location. However, several experiments assessing traceability recovery have shown that analysts cannot utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability links, it is an important question whether the same phenomena are visible in feature location or not. To answer that question, we conducted a controlled experiment. We asked 20 subjects to locate features using lists of methods whose accuracy was controlled artificially. The result differs from the traceability recovery experiments: subjects given an accurate list were able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of the tasks. The results show that the accuracy of automated feature location techniques is effective, but it might be insufficient for perfect feature location.
    BibTeX
    @inproceedings{ishio-wcre2013,
        author = {Takashi Ishio and Shinpei Hayashi and Hiroshi Kazato and Tsuyoshi Oshima},
        title = {On the Effectiveness of Accuracy of Automated Feature Location Technique},
        booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
        pages = {381--390},
        year = 2013,
        month = {oct},
    }
    [ishio-wcre2013]: as a page
  55. Hiroshi Kazato, Shinpei Hayashi, Takashi Kobayashi, Tsuyoshi Oshima, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Incremental Feature Location and Identification in Source Code". In Proceedings of the 17th European Conference on Software Maintenance and Reengineering (CSMR 2013), ERA Track, pp. 371-374. Genova, Italy, mar, 2013.
    ID
    DOI: 10.1109/CSMR.2013.52
    Abstract
    Feature location (FL) in source code is an important task for program understanding. Existing dynamic FL techniques depend on sufficient scenarios for exercising the features to be located. However, it is difficult to prepare such scenarios because doing so involves a correct understanding of the features. This paper proposes an incremental technique for refining the identification of features, integrated with an existing FL technique based on formal concept analysis. In our technique, we classify the differences between static and dynamic dependencies of method invocations based on their relevance to the identified features. According to the classification, the technique suggests method invocations to exercise unexplored parts of the features. An application example indicates the effectiveness of the approach.
    Slide
    BibTeX
    @inproceedings{kazato-csmr2013,
        author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Tsuyoshi Oshima and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Incremental Feature Location and Identification in Source Code},
        booktitle = {Proceedings of the 17th European Conference on Software Maintenance and Reengineering},
        pages = {371--374},
        year = 2013,
        month = {mar},
    }
    [kazato-csmr2013]: as a page
  56. Haruhiko Kaiya, Shunsuke Morita, Shinpei Ogata, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Model Transformation Patterns for Introducing Suitable Information Systems". In Proceedings of the 19th Asia-Pacific Software Engineering Conference (APSEC 2012), pp. 434-439. Hong Kong, dec, 2012.
    ID
    DOI: 10.1109/APSEC.2012.52
    Abstract
    When information systems are introduced in a social setting such as a business, the systems will have good and bad impacts on stakeholders in the setting. Requirements analysts have to predict such impacts in advance because stakeholders cannot decide whether the systems are really suitable for them without such a prediction. In this paper, we propose a method based on model transformation patterns for introducing suitable information systems. We use metrics of a model to predict whether a system introduction is suitable for a social setting. Through a case study, we show that our method can avoid the introduction of a system that was actually bad for some stakeholders. In the case study, we use a strategic dependency model in i* to specify the model of systems and stakeholders, and attributed graph grammar for model transformation. We focus on the responsibility and satisfaction of stakeholders as the criteria for the suitability of system introduction in this case study.
    BibTeX
    @inproceedings{kaiya-apsec2012,
        author = {Haruhiko Kaiya and Shunsuke Morita and Shinpei Ogata and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Model Transformation Patterns for Introducing Suitable Information Systems},
        booktitle = {Proceedings of the 19th Asia-Pacific Software Engineering Conference},
        pages = {434--439},
        year = 2012,
        month = {dec},
    }
    [kaiya-apsec2012]: as a page
  57. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Cutting a Method Call Graph for Supporting Feature Location". In Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice (IWESEP 2012), pp. 55-57. Osaka, Japan, oct, 2012.
    ID
    DOI: 10.1109/IWESEP.2012.17
    Abstract
    This paper proposes a technique for locating the implementation of features by combining a graph cut technique and formal concept analysis based on methods and scenarios.
    BibTeX
    @inproceedings{kato-iwesep2012,
        author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki},
        title = {Cutting a Method Call Graph for Supporting Feature Location},
        booktitle = {Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice},
        pages = {55--57},
        year = 2012,
        month = {oct},
    }
    [kato-iwesep2012]: as a page
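    Note: the formal concept analysis ingredient shared by this and several neighboring entries can be sketched over a small methods-by-scenarios incidence table: each formal concept pairs a set of scenarios with the methods common to their traces. The table below is invented for illustration, and the brute-force enumeration stands in for a real FCA algorithm.

    from itertools import combinations

    executes = {                          # scenario -> methods observed in its trace
        "add item": {"Cart.add", "Stock.check"},
        "pay":      {"Pay.charge", "Stock.check"},
    }
    ALL_METHODS = set().union(*executes.values())

    def methods_of(scenarios):            # extent -> intent
        if not scenarios:
            return set(ALL_METHODS)       # the empty extent maps to every method
        return set.intersection(*(executes[s] for s in scenarios))

    def scenarios_of(methods):            # intent -> extent
        return {s for s, ms in executes.items() if methods <= ms}

    concepts = set()
    for r in range(len(executes) + 1):
        for ss in combinations(executes, r):
            intent = methods_of(set(ss))
            extent = scenarios_of(intent)          # close the candidate pair
            concepts.add((frozenset(extent), frozenset(intent)))

    for extent, intent in sorted(concepts, key=lambda c: len(c[0])):
        print(sorted(extent), "->", sorted(intent))
    # the concept covering both scenarios isolates the shared method Stock.check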
  58. Katsuhisa Maruyama, Eijiro Kitsu, Takayuki Omori, Shinpei Hayashi: "Slicing and Replaying Code Change History". In Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), Short paper session, pp. 246-249. Essen, Germany, sep, 2012.
    ID
    DOI: 10.1145/2351676.2351713
    Abstract
    Change-aware development environments have recently become feasible and reasonable. These environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, programmers do not always need to replay all the code changes to investigate how a particular entity of the program has been changed; therefore, they often skip several code changes of no interest. This skipping action is an obstacle that makes many programmers hesitate to use existing replaying tools. This paper proposes a slicing mechanism that can extract only the code changes necessary to construct a particular class member of a Java program from the whole history of past code changes. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program. The paper also presents a running tool that implements the proposed slicing and replays its resulting slices. With this tool, programmers can avoid replaying edit operations nonessential to the construction of the class members they want to understand.
    BibTeX
    @inproceedings{maruyama-ase2012,
        author = {Katsuhisa Maruyama and Eijiro Kitsu and Takayuki Omori and Shinpei Hayashi},
        title = {Slicing and Replaying Code Change History},
        booktitle = {Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering},
        pages = {246--249},
        year = 2012,
        month = {sep},
    }
    [maruyama-ase2012]: as a page
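    Note: a faithful slicer tracks character offsets across the whole history; the toy sketch below instead tags each recorded edit with its enclosing class member and filters on that tag, which is enough to show how a slice replays only the edits needed for one member. All names are illustrative.

    # (enclosing member, operation kind, edited text), in recording order
    history = [
        ("Foo#bar()", "insert", "void bar() {}"),
        ("Foo#baz()", "insert", "void baz() {}"),
        ("Foo#bar()", "insert-into-body", "return;"),
    ]

    def slice_history(member: str):
        """Keep only the edit operations that contributed to `member`, in order."""
        return [e for e in history if e[0] == member]

    for _, kind, text in slice_history("Foo#bar()"):
        print(kind, "->", text)   # replay just these to reconstruct Foo#bar()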
  59. Shinpei Hayashi, Takayuki Omori, Teruyoshi Zenmyo, Katsuhisa Maruyama, Motoshi Saeki: "Refactoring Edit History of Source Code". In Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM 2012), ERA Track, pp. 617-620. Riva del Garda, Trento, Italy, sep, 2012.
    ID
    DOI: 10.1109/ICSM.2012.6405336
    Abstract
    This paper proposes a concept for refactoring an edit history of source code and a technique for its automation. The aim of our history refactoring is to improve the clarity and usefulness of the history without changing its overall effect. We have defined primitive history refactorings, including their preconditions and procedures, and larger refactorings composed of these primitives. Moreover, we have implemented a supporting tool that automates the application of history refactorings in the middle of a source code editing process. Our tool enables developers to pursue useful applications of history refactorings, such as task-level commits from an entangled edit history and selective undo of past edit operations.
    Slide
    BibTeX
    @inproceedings{hayashi-icsm2012,
        author = {Shinpei Hayashi and Takayuki Omori and Teruyoshi Zenmyo and Katsuhisa Maruyama and Motoshi Saeki},
        title = {Refactoring Edit History of Source Code},
        booktitle = {Proceedings of the 28th IEEE International Conference on Software Maintenance},
        pages = {617--620},
        year = 2012,
        month = {sep},
    }
    [hayashi-icsm2012]: as a page
  60. Haruhiko Kaiya, Shunsuke Morita, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Facilitating Business Improvement by Information Systems using Model Transformation and Metrics". In Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering (CAiSE 2012), pp. 106-113. Gdańsk, Poland, jun, 2012.
    URL
    http://ceur-ws.org/Vol-855/paper13.pdf
    Abstract
    We propose a method to explore how to improve a business by introducing information systems. We use a meta-modeling technique to specify the business itself and its metrics. The metrics are defined based on the structural information of the business model, so that they can help us to identify whether the business is good or not with respect to several different aspects. We also use a model transformation technique to specify an idea for business improvement. The metrics help us to predict whether the improvement idea makes the business better or not. We use strategic dependency (SD) models in i* to specify the business, and attributed graph grammar (AGG) for the model transformation.
    BibTeX
    @inproceedings{kaiya-caise2012,
        author = {Haruhiko Kaiya and Shunsuke Morita and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Facilitating Business Improvement by Information Systems using Model Transformation and Metrics},
        booktitle = {Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering},
        pages = {106--113},
        year = 2012,
        month = {jun},
    }
    [kaiya-caise2012]: as a page
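    Sketch
    A hypothetical sketch of a structural metric over an i* strategic dependency model, encoded here as directed actor-to-actor dependencies; the metric (how many dependencies each actor bears) is illustrative rather than one the paper necessarily defines. Comparing such metric values before and after a candidate model transformation indicates whether an improvement idea helps.
        from collections import Counter

        # (depender, dependee) pairs from a toy SD model.
        deps = [("Customer", "Clerk"), ("Clerk", "Warehouse"),
                ("Customer", "Warehouse")]

        load = Counter(dependee for _, dependee in deps)
        print(load.most_common())   # [('Warehouse', 2), ('Clerk', 1)]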
  61. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Toward Structured Location of Features". In Proceedings of the 20th IEEE International Conference on Program Comprehension (ICPC 2012), Poster Session, pp. 255-256. Passau, Germany, jun, 2012.
    ID
    DOI: 10.1109/ICPC.2012.6240497
    Abstract
    This paper proposes structured location, a semi-automatic technique and its supporting tool for locating features and exposing their structures in source code, using a combination of dynamic analysis, sequential pattern mining, and formal concept analysis.
    Slide
    BibTeX
    @inproceedings{kazato-icpc2012,
        author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Toward Structured Location of Features},
        booktitle = {Proceedings of the 20th IEEE International Conference on Program Comprehension},
        pages = {255--256},
        year = 2012,
        month = {jun},
    }
    [kazato-icpc2012]: as a page
  62. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Feature Location for Multi-Layer System Based on Formal Concept Analysis". In Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR 2012), pp. 429-434. Szeged, Hungary, mar, 2012.
    ID
    DOI: 10.1109/CSMR.2012.54
    Abstract
    Locating features in software composed of multiple layers is a challenging problem because we have to find program elements that are distributed over layers yet still work together to constitute a feature. This paper proposes a semi-automatic technique to extract correspondences between features and program elements among layers. By merging execution traces of each layer and feeding them into formal concept analysis, collaborative program elements are grouped into formal concepts and tied with a set of execution scenarios. We applied our technique to an example web application composed of three layers. The result indicates that our technique is not only feasible but also promising for promoting program understanding in a more realistic context.
    Slide
    BibTeX
    @inproceedings{kazato-csmr2012,
        author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Feature Location for Multi-Layer System Based on Formal Concept Analysis},
        booktitle = {Proceedings of the 16th European Conference on Software Maintenance and Reengineering},
        pages = {429--434},
        year = 2012,
        month = {mar},
    }
    [kazato-csmr2012]: as a page
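    Sketch
    A minimal formal concept analysis (FCA) sketch in the spirit of the abstract above: execution scenarios are objects, and executed program elements merged across layers are attributes; each formal concept then groups elements that run together under a set of scenarios. The traces are invented for illustration.
        from itertools import combinations

        traces = {  # scenario -> program elements (from all layers) it executed
            "addItem":    {"jsp:item", "ctl:add", "dao:insert"},
            "removeItem": {"jsp:item", "ctl:remove", "dao:delete"},
            "listItems":  {"jsp:item", "ctl:list"},
        }

        def concepts(traces):
            found = set()
            for r in range(1, len(traces) + 1):
                for group in combinations(traces, r):
                    intent = set.intersection(*(traces[s] for s in group))
                    extent = frozenset(s for s in traces if intent <= traces[s])
                    found.add((extent, frozenset(intent)))
            return found

        for extent, intent in sorted(concepts(traces), key=lambda c: len(c[0])):
            print(sorted(extent), "->", sorted(intent))
        # e.g. all three scenarios share 'jsp:item', while 'ctl:add' and
        # 'dao:insert' collaborate only under the addItem scenario.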
  63. Sirinut Thangthumachit, Shinpei Hayashi, Motoshi Saeki: "Understanding Source Code Differences by Separating Refactoring Effects". In Proceedings of the 18th Asia Pacific Software Engineering Conference (APSEC 2011), pp. 339-347. Ho Chi Minh city, Vietnam, dec, 2011.
    ID
    DOI: 10.1109/APSEC.2011.47
    Abstract
    Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if refactoring is applied between those versions, the source code differences become more complicated and harder to understand. Although many techniques for extracting refactoring effects from such differences have been studied, it is also necessary to exclude the extracted refactorings' effects and reconstruct the differences into meaningful, understandable ones with no refactoring effect. As described in this paper, we propose a novel technique to address this difficulty. Using our technique, we extract the refactoring effects and then apply them to the old version of the source code to produce the differences without refactoring effects. We also implemented a support tool that helps separate refactorings automatically. An evaluation of open source software showed that our tool is applicable to all target refactorings. Our technique is therefore useful in real situations. Evaluation testing also demonstrated that the approach reduced the code differences by more than 21% on average, and that developers can understand more changes from the differences using our approach than when using the original one in the same limited time.
    Slide
    BibTeX
    @inproceedings{zui-apsec2011,
        author = {Sirinut Thangthumachit and Shinpei Hayashi and Motoshi Saeki},
        title = {Understanding Source Code Differences by Separating Refactoring Effects},
        booktitle = {Proceedings of the 18th Asia Pacific Software Engineering Conference},
        pages = {339--347},
        year = 2011,
        month = {dec},
    }
    [zui-apsec2011]: as a page
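    Sketch
    A minimal sketch of the separation idea, assuming a single extracted rename refactoring: applying it to the old version first leaves a diff that contains only the non-refactoring change. Python's difflib and re stand in for a real differencer and refactoring engine.
        import difflib, re

        old = "int calc(int a) { return a * 2; }"
        new = "int computePrice(int a) { return a * 2 + TAX; }"

        old_name, new_name = "calc", "computePrice"   # extracted refactoring
        old_refactored = re.sub(rf"\b{old_name}\b", new_name, old)

        # The remaining diff is free of the rename's noise.
        for line in difflib.unified_diff([old_refactored], [new], lineterm=""):
            print(line)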
  64. Motohiro Akiyama, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Supporting Design Model Refactoring for Improving Class Responsibility Assignment". In Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems (MODELS 2011), Lecture Notes in Computer Science, vol. 6981, pp. 455-469. Wellington, New Zealand, oct, 2011.
    ID
    DOI: 10.1007/978-3-642-24485-8_33
    Abstract
    Although a responsibility-driven approach in object-oriented analysis and design methodologies is promising, the assignment of the identified responsibilities to classes (class responsibility assignment: CRA) is a crucial issue in achieving a design of higher quality. GRASP, described by Larman, is a guideline for CRA and is being put into practice. However, since it is described informally in a natural language, its successful usage greatly relies on designers' skills. This paper proposes a technique to represent GRASP formally and to automate appropriate CRA based on it. Our computerized tool automatically detects inappropriate CRA and suggests alternative appropriate CRAs to designers so that they can improve a CRA based on the suggested alternatives. We conducted preliminary experiments to show the usefulness of our tool.
    Slide
    BibTeX
    @inproceedings{akiyama-models2011,
        author = {Motohiro Akiyama and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Supporting Design Model Refactoring for Improving Class Responsibility Assignment},
        booktitle = {Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems},
        pages = {455--469},
        year = 2011,
        month = {oct},
    }
    [akiyama-models2011]: as a page
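    Sketch
    A hypothetical sketch of formalizing one GRASP pattern (Information Expert) as a mechanical check over a design model: a method that uses only another class's attributes is flagged, and moving it there is suggested as an alternative CRA. The model encoding is invented.
        design = {
            "Order": {"attrs": {"items"},
                      "methods": {"total": {"uses": {"Customer.discount"}}}},
            "Customer": {"attrs": {"discount"}, "methods": {}},
        }

        def misassigned(design):
            for cls, d in design.items():
                for name, info in d["methods"].items():
                    owners = {ref.split(".")[0] for ref in info["uses"]}
                    if owners and cls not in owners:  # uses no data of its class
                        yield f"{cls}.{name}: consider moving to {sorted(owners)}"

        print(list(misassigned(design)))
        # ["Order.total: consider moving to ['Customer']"]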
  65. Shinpei Hayashi, Takashi Yoshikawa, Motoshi Saeki: "Sentence-to-Code Traceability Recovery with Domain Ontologies". In Proceedings of the 17th Asia Pacific Software Engineering Conference (APSEC 2010), pp. 385-394. Sydney, Australia, nov, 2010.
    ID
    DOI: 10.1109/APSEC.2010.51
    Abstract
    We propose an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with sentences describing a feature, the relations between source code structures and problem domains are important. We model the knowledge of the problem domains as domain ontologies having concepts of the domains and their relations. Using semantic relations on the ontologies in addition to method invocation relations and the similarity between identifiers in the code and words in the sentences, we locate the code fragments corresponding to the given sentences. Additionally, our prioritization mechanism, which orders the located code fragments based on the ontologies, enables users to select and analyze the results effectively. To show the effectiveness of our approach in terms of accuracy, a case study was carried out with our proof-of-concept tool and summarized.
    Slide
    BibTeX
    @inproceedings{hayashi-apsec2010,
        author = {Shinpei Hayashi and Takashi Yoshikawa and Motoshi Saeki},
        title = {Sentence-to-Code Traceability Recovery with Domain Ontologies},
        booktitle = {Proceedings of the 17th Asia Pacific Software Engineering Conference},
        pages = {385--394},
        year = 2010,
        month = {nov},
    }
    [hayashi-apsec2010]: as a page
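    Sketch
    A minimal sketch of the scoring idea: a method's identifier words match a feature sentence either directly or through a domain-ontology relation, and the match count ranks candidate code fragments. The ontology, sentence, and methods below are invented.
        ontology = {("draw", "render"), ("canvas", "image")}  # related concepts

        def related(w1, w2):
            return w1 == w2 or (w1, w2) in ontology or (w2, w1) in ontology

        def score(identifier_words, sentence_words):
            return sum(1 for iw in identifier_words
                         for sw in sentence_words if related(iw, sw))

        sentence = {"draw", "image"}
        methods = {"renderCanvas": {"render", "canvas"},
                   "saveFile": {"save", "file"}}
        for name, words in methods.items():
            print(name, score(words, sentence))  # renderCanvas: 2, saveFile: 0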
  66. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "Visualizing Stakeholder Concerns with Anchored Map". In Proceedings of the 5th International Workshop on Requirements Engineering Visualization (REV 2010), co-located with RE 2010, pp. 20-24. Sydney, Australia, sep, 2010.
    ID
    DOI: 10.1109/REV.2010.5625662
    Abstract
    Software development is a cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as an imbalance of stakeholders or a lack of stakeholders. This paper presents a tool which visualizes the strength of stakeholders' interest in concerns on a two-dimensional screen. The proposed tool generates an anchored map from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods. The graph has information on stakeholders' interest in concerns, and its degree, as attributes of goals. Results from a case study are that (1) some concerns are not connected to any stakeholders and (2) stakeholders of the same type are interested in different concerns. The results suggest a lack of stakeholders for the unconnected concerns and a need for stakeholders of the same type to unify their requirements.
    Slide
    BibTeX
    @inproceedings{ugai-rev2010,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {Visualizing Stakeholder Concerns with Anchored Map},
        booktitle = {Proceedings of the 5th International Workshop on Requirements Engineering Visualization},
        pages = {20--24},
        year = 2010,
        month = {sep},
    }
    [ugai-rev2010]: as a page
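    Sketch
    A hypothetical sketch of the analysis behind the visualization: with stakeholder-to-concern interest edges taken from an attributed goal graph, a concern that no stakeholder is connected to indicates a possible lack of stakeholders. The data is invented.
        interest = {("Alice", "security"), ("Bob", "usability"),
                    ("Alice", "usability")}
        concerns = {"security", "usability", "performance"}

        uncovered = concerns - {concern for _, concern in interest}
        print(uncovered)   # {'performance'}: no stakeholder covers it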
  67. Shinpei Hayashi, Katsuyuki Sekine, Motoshi Saeki: "iFL: An Interactive Environment for Understanding Feature Implementations". In Proceedings of the 26th IEEE International Conference on Software Maintenance (ICSM 2010), ERA Track, pp. 1-5. Timisoara, Romania, sep, 2010.
    ID
    DOI: 10.1109/ICSM.2010.5609669
    Abstract
    We propose iFL, an interactive environment that is useful for effectively understanding feature implementation by application of feature location (FL). With iFL, the inputs for FL are improved incrementally by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback obtained by partially judging whether or not a fragment is relevant improves the evaluation score of FL. Users can then obtain more accurate results. Case studies with iFL show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach.
    Slide
    BibTeX
    @inproceedings{hayashi-icsm2010,
        author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki},
        title = {{iFL}: An Interactive Environment for Understanding Feature Implementations},
        booktitle = {Proceedings of the 26th IEEE International Conference on Software Maintenance},
        pages = {1--5},
        year = 2010,
        month = {sep},
    }
    [hayashi-icsm2010]: as a page
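    Sketch
    A minimal sketch of the interactive loop: marking a retrieved fragment as relevant expands the query with that fragment's identifier words, so the next round ranks related fragments higher. The query-expansion rule is an assumption (Rocchio-style feedback); the paper's exact update may differ.
        def search(query, fragments):
            return sorted(fragments, key=lambda f: -len(query & fragments[f]))

        fragments = {"m1": {"cart", "total"}, "m2": {"cart", "render"},
                     "m3": {"log"}}
        query = {"cart"}
        print(search(query, fragments))   # m1 and m2 tie, ahead of m3

        query |= fragments["m1"]          # user judges m1 relevant
        print(search(query, fragments))   # m1 now ranks strictly first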
  68. Shinpei Hayashi, Motoshi Saeki: "Recording Finer-Grained Software Evolution with IDE: An Annotation-Based Approach". In Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution (IWPSE-EVOL 2010), co-located with ASE 2010, pp. 8-12. Antwerp, Belgium, sep, 2010.
    ID
    DOI: 10.1145/1862372.1862378
    ISBN: 978-1-4503-0128-2
    Abstract
    This paper proposes a formalized technique for generating finer-grained source code deltas according to a developer's editing intentions. Using the technique, the developer classifies edit operations of source code by annotating the time series of the edit history with the switching information of their editing intentions. Based on the classification, the history is sorted and converted automatically to appropriate source code deltas to be committed separately to a version repository. This paper also presents algorithms for automating the generation process and a prototyping tool to implement them.
    Slide
    BibTeX
    @inproceedings{hayashi-iwpse-evol2010,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Recording Finer-Grained Software Evolution with {IDE}: An Annotation-Based Approach},
        booktitle = {Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution},
        pages = {8--12},
        year = 2010,
        month = {sep},
    }
    [hayashi-iwpse-evol2010]: as a page
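    Sketch
    A minimal sketch of the generation step: the developer annotates the edit stream with intention switches, and the history is grouped into one delta per intention, each committable separately. The (intention, edit) encoding is illustrative.
        from collections import defaultdict

        history = [("fix-bug", "edit A.java:12"), ("refactor", "edit B.java:3"),
                   ("fix-bug", "edit A.java:40")]

        deltas = defaultdict(list)
        for intention, edit in history:
            deltas[intention].append(edit)

        for intention, edits in deltas.items():
            print(f"commit '{intention}':", edits)
        # commit 'fix-bug': ['edit A.java:12', 'edit A.java:40']
        # commit 'refactor': ['edit B.java:3']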
  69. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation". In Proceedings of the 10th International Conference on Quality Software (QSIC 2010), pp. 357-360. jul, 2010.
    ID
    DOI: 10.1109/QSIC.2010.19
    Abstract
    This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout the AGORA steps, including constructing goal graphs in group work, utilizing domain ontologies for goal graph construction, detecting various types of conflicts among goals, prioritizing goals, analyzing impacts when modifying a goal graph, and version control of goal graphs.
    BibTeX
    @inproceedings{saeki-qsic2010,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation},
        booktitle = {Proceedings of the 10th International Conference on Quality Software},
        pages = {357--360},
        year = 2010,
        month = {jul},
    }
    [saeki-qsic2010]: as a page
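    Sketch
    A hypothetical sketch of one AGORA analysis: goals carry stakeholder-assigned preference scores as attributes, and a goal whose scores diverge widely across stakeholders is reported as a potential conflict. The threshold and data are invented.
        goals = {  # goal -> stakeholder preference scores (goal attributes)
            "fast-checkout":   {"customer": 10, "operator": 8},
            "collect-profile": {"customer": -5, "operator": 9},
        }

        for goal, scores in goals.items():
            if max(scores.values()) - min(scores.values()) > 10:
                print("potential conflict on:", goal, scores)
        # flags 'collect-profile', where stakeholders disagree strongly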
  70. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "A Tool for Attributed Goal-Oriented Requirements Analysis". In Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE 2009), pp. 670-672. Auckland, New Zealand, nov, 2009.
    ID
    DOI: 10.1109/ASE.2009.34
    Abstract
    This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout the AGORA steps, including constructing goal graphs in group work, prioritizing goals, and version control of goal graphs.
    BibTeX
    @inproceedings{saeki-ase2009,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {A Tool for Attributed Goal-Oriented Requirements Analysis},
        booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering},
        pages = {670--672},
        year = 2009,
        month = {nov},
    }
    [saeki-ase2009]: as a page
  71. Hiroshi Kazato, Rafael Weiss, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Model-View-Controller Architecture Specific Model Transformation". In Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling (DSM 2009), co-located with OOPSLA 2009. Orlando, Florida, USA, oct, 2009.
    Abstract
    In this paper, we propose a model-driven development technique specific to the Model-View-Controller (MVC) architecture domain. Even though many application frameworks and source code generators are available for implementing this architecture, they depend on implementation-specific concepts, which take much effort to learn and use. To address this issue, we define a UML profile to capture architectural concepts directly in a model and provide a set of transformation mappings for each supported platform, in order to bridge architectural and implementation concepts. By applying these model transformations together with source code generators, our MVC-based model can be mapped to various kinds of platforms. Since we restrict the domain to the MVC architecture only, automating model transformation to source code is possible. We have prototyped a supporting tool and evaluated the feasibility of our approach through a case study. It demonstrates that model transformations specific to the MVC architecture can produce source code for two different platforms.
    BibTeX
    @inproceedings{kazato-dsm2009,
        author = {Hiroshi Kazato and Rafael Weiss and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Model-View-Controller Architecture Specific Model Transformation},
        booktitle = {Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling},
        year = 2009,
        month = {oct},
    }
    [kazato-dsm2009]: as a page
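    Sketch
    A minimal sketch of the mapping idea: one architectural concept ("controller") is transformed into different implementation concepts per platform through per-platform mapping tables. The platform names and code templates are invented stand-ins.
        mappings = {
            "platformA": {"controller":
                          "public class {name}Action extends Action {{}}"},
            "platformB": {"controller":
                          "@Controller public class {name}Controller {{}}"},
        }

        def generate(kind, name, platform):
            return mappings[platform][kind].format(name=name)

        print(generate("controller", "Order", "platformA"))
        print(generate("controller", "Order", "platformB"))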
  72. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages". In Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems (MODELS 2009), Lecture Notes in Computer Science, vol. 5795, pp. 650-664. Denver, Colorado, USA, oct, 2009.
    ID
    DOI: 10.1007/978-3-642-04425-0_52
    Abstract
    The Object Constraint Language (OCL) carries a platform-independent characteristic allowing it to be decoupled from implementation details, and therefore it is widely applied in model transformations used by model-driven development techniques. However, OCL is also tremendously useful in the implementation phase, aiding assertion code generation and allowing system verification. Yet, taking full advantage of OCL without destroying its platform independence is a difficult task. This paper proposes an approach for generating assertion code from OCL constraints by using a model transformation technique to abstract language-specific details away from OCL's high-level concepts, showing the wide applicability of model transformation techniques. We take advantage of structural similarities among implementation languages to describe a rewriting framework, which is used to easily and flexibly reformulate OCL constraints into any target language, making them executable on any platform. A tool is implemented to demonstrate the effectiveness of this approach.
    Slide
    BibTeX
    @inproceedings{rodion-models2009,
        author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
        title = {Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages},
        booktitle = {Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems},
        pages = {650--664},
        year = 2009,
        month = {oct},
    }
    [rodion-models2009]: as a page
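    Sketch
    A minimal sketch of the rewriting idea: a table of language-specific rules reformulates OCL concepts into a target language's syntax, here producing a Java-style assertion. The two rules and the invariant are toy examples; the paper's framework is far more systematic.
        rules_java = [("self.", "this."), ("<>", "!="), (" and ", " && ")]

        def to_assertion(ocl, rules):
            expr = ocl
            for src, dst in rules:   # rewrite OCL concepts one by one
                expr = expr.replace(src, dst)
            return f"assert {expr};"

        inv = "self.balance >= 0 and self.owner <> null"
        print(to_assertion(inv, rules_java))
        # assert this.balance >= 0 && this.owner != null;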
  73. Takashi Yoshikawa, Shinpei Hayashi, Motoshi Saeki: "Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies". In Proceedings of the 25th International Conference on Software Maintenance (ICSM 2009), pp. 551-554. Edmonton, Canada, sep, 2009.
    ID
    DOI: 10.1109/ICSM.2009.5306390
    URL
    https://sites.google.com/site/ieeeicsm09/
    Abstract
    This paper proposes an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with functional descriptions written in the form of simple sentences, the relationships between source code structures and problem domains are important. In our approach, we model the knowledge of the problem domains as domain ontologies. By using semantic relationships of the ontologies in addition to method invocation relationships and the similarity between identifiers in the code and words in the sentences, we can detect code fragments corresponding to the sentences. A case study within the domain of painting software shows that we obtained results of higher quality than without the ontologies.
    BibTeX
    @inproceedings{yoshikawa-icsm2009,
        author = {Takashi Yoshikawa and Shinpei Hayashi and Motoshi Saeki},
        title = {Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies},
        booktitle = {Proceedings of the 25th International Conference on Software Maintenance},
        pages = {551--554},
        year = 2009,
        month = {sep},
    }
    [yoshikawa-icsm2009]: as a page
  74. Kohei Uno, Shinpei Hayashi, Motoshi Saeki: "Constructing Feature Models using Goal-Oriented Analysis". In Proceedings of the 9th International Conference on Quality Software (QSIC 2009), pp. 412-417. aug, 2009.
    ID
    DOI: 10.1109/QSIC.2009.61
    Abstract
    This paper proposes a systematic approach to deriving the feature models required in software product line development. In our approach, we use goal graphs constructed through goal-oriented requirements analysis. We merge multiple goal graphs into one graph and then, regarding the leaves of the merged graph as candidate features, identify their commonality and variability based on the achievability of product goals. Through a case study in the portable music player domain, we obtained a high-quality feature model.
    BibTeX
    @inproceedings{uno-qsic2009,
        author = {Kohei Uno and Shinpei Hayashi and Motoshi Saeki},
        title = {Constructing Feature Models using Goal-Oriented Analysis},
        booktitle = {Proceedings of the 9th International Conference on Quality Software},
        pages = {412--417},
        year = 2009,
        month = {aug},
    }
    [uno-qsic2009]: as a page
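    Sketch
    A minimal sketch of the derivation: leaf goals from several products' goal graphs become candidate features; a leaf shared by every product is treated as a mandatory (common) feature and the rest as variability. Products and leaves are invented.
        graphs = {  # product -> leaf goals of its goal graph
            "playerA": {"play mp3", "shuffle", "equalizer"},
            "playerB": {"play mp3", "shuffle", "radio"},
        }

        common = set.intersection(*graphs.values())      # mandatory features
        variable = set.union(*graphs.values()) - common  # optional features
        print("mandatory:", sorted(common))   # ['play mp3', 'shuffle']
        print("optional:", sorted(variable))  # ['equalizer', 'radio']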
  75. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Detecting Occurrences of Refactoring with Heuristic Search". In Proceedings of the 15th Asia-Pacific Software Engineering Conference (APSEC 2008), pp. 453-460. Beijing, China, dec, 2008.
    ID
    DOI: 10.1109/APSEC.2008.9
    ISSN: 1530-1362
    ISBN: 978-0-7695-3446-6
    Abstract
    This paper proposes a novel technique to detect occurrences of refactoring from a version archive, in order to reduce the effort spent in understanding what modifications have been applied. In a real software development process, a refactoring operation may sometimes be performed together with other modifications in the same revision. This means that understanding the differences between two versions stored in the archive is not usually an easy process. In order to detect these impure refactorings, we model the detection as a graph search. Our technique considers a version of a program as a state and a refactoring as a transition. It then searches for a path from the initial state to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state to choose the candidate refactorings to be applied next and to estimate the heuristic distance to the final state. We have clearly demonstrated the feasibility of our approach through a case study.
    Slide
    BibTeX
    @inproceedings{hayashi-apsec2008,
        author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
        title = {Detecting Occurrences of Refactoring with Heuristic Search},
        booktitle = {Proceedings of the 15th Asia-Pacific Software Engineering Conference},
        pages = {453--460},
        year = 2008,
        month = {dec},
    }
    [hayashi-apsec2008]: as a page
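    Sketch
    A minimal sketch of the search formulation: program versions are states, refactoring applications are transitions, and a textual distance to the final revision serves as the heuristic. The token-level distance and the single available refactoring kind (rename) are toy stand-ins for the paper's machinery.
        import heapq, re

        def distance(a, b):   # heuristic: differing tokens between versions
            ta, tb = a.split(), b.split()
            return sum(x != y for x, y in zip(ta, tb)) + abs(len(ta) - len(tb))

        def renames(state):   # candidate refactorings applicable to a state
            for old, new in [("calc", "compute"), ("val", "value")]:
                yield f"rename {old}->{new}", re.sub(rf"\b{old}\b", new, state)

        def detect(start, goal):
            queue = [(distance(start, goal), start, [])]
            seen = set()
            while queue:
                _, state, path = heapq.heappop(queue)
                if state == goal:
                    return path
                if state in seen:
                    continue
                seen.add(state)
                for label, nxt in renames(state):
                    heapq.heappush(queue,
                                   (distance(nxt, goal), nxt, path + [label]))

        print(detect("int calc ( int val )", "int compute ( int value )"))
        # ['rename val->value', 'rename calc->compute'] (order may vary)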
  76. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "Preparation and usage of gene coexpression data". In the 19th International Conference on Arabidopsis Research (ICAR 2008). Montreal, Canada, jun, 2008.
    Abstract
    Gene coexpression provides key information to understand living systems because coexpressed genes are often involved in the same or related biological pathways. Coexpression data are now used for a wide variety of experimental designs, such as gene targeting, regulatory investigations and/or identification of potential partners in protein-protein interactions. We constructed two databases for Arabidopsis (ATTED-II, http://www.atted.bio.titech.ac.jp) and mammals (COXPRESdb, http://coxpresdb.hgc.jp). Based on pairwise gene coexpression, coexpressed gene networks were prepared in these databases. To support gene coexpression, known protein-protein interactions, common metabolic pathways and conserved coexpression were also represented on the networks. We used Google Maps API to visualize large networks interactively. The relationships of the coexpression database with other large-scale data will be discussed, in addition to data construction procedures and typical usages of coexpression data.
    BibTeX
    @misc{obayashi-icar2008,
        author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {Preparation and usage of gene coexpression data},
        howpublished = {In the 19th International Conference on Arabidopsis Research},
        year = 2008,
        month = {jun},
    }
    [obayashi-icar2008]: as a page
  77. Shinpei Hayashi, Motoshi Saeki: "Extracting Prehistories of Software Refactorings from Version Archives". In Large-Scale Knowledge Resources. Construction and Application - Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources (LKR 2008), Lecture Notes in Artificial Intelligence, vol. 4938, pp. 82-89. Tokyo Institute of Technology (Ookayama Campus), Tokyo, Japan, mar, 2008.
    ID
    DOI: 10.1007/978-3-540-78159-2_9
    Abstract
    This paper proposes an automated technique to extract the prehistories of software refactorings from existing software version archives, which is, in turn, a technique to discover knowledge for finding refactoring opportunities. We focus on two types of knowledge to extract: characteristic modification histories and fluctuations of the values of complexity measures. First, we extract modified fragments of code by calculating the difference between the abstract syntax trees of the programs picked up from an existing software repository. We also extract past cases of refactorings, and then we create traces of program elements by associating modified fragments with cases of refactorings to find the structures that frequently occur. The extracted traces help us identify how and where to refactor programs, and this leads to improved program design.
    BibTeX
    @inproceedings{hayashi-lkr2008,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Extracting Prehistories of Software Refactorings from Version Archives},
        booktitle = {Large-Scale Knowledge Resources. Construction and Application -- Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources},
        pages = {82--89},
        year = 2008,
        month = {mar},
    }
    [hayashi-lkr2008]: as a page
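    Sketch
    A minimal sketch of the first step, with difflib over lines standing in for the paper's abstract-syntax-tree differencing: modified fragments are extracted from two revisions and would then be associated with detected refactoring cases. The revisions are invented.
        import difflib

        old = ["class A {", "  int f() { return g() + h(); }", "}"]
        new = ["class A {", "  int f() { return sum(); }", "}"]

        sm = difflib.SequenceMatcher(a=old, b=new)
        for tag, i1, i2, j1, j2 in sm.get_opcodes():
            if tag != "equal":
                print(tag, old[i1:i2], "->", new[j1:j2])  # modified fragment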
  78. Shinpei Hayashi, Motoshi Saeki: "Eclipse Plug-ins for Collecting and Analyzing Program Modifications". In Eclipse Technology eXchange Workshop (ETX 2006), co-located with OOPSLA 2006, Poster Session. Oregon Convention Center, Portland, Oregon, USA, oct, 2006.
    Abstract
    In this poster, we discuss the need for collecting and analyzing program modification histories, i.e., sequences of fine-grained program editing operations. We then introduce Eclipse plug-ins that can collect and analyze modification histories, and show a useful application technique that suggests suitable refactoring opportunities to developers by analyzing the histories.
    BibTeX
    @misc{hayashi-etx2006,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Eclipse Plug-ins for Collecting and Analyzing Program Modifications},
        howpublished = {In Eclipse Technology eXchange Workshop},
        year = 2006,
        month = {oct},
    }
    [hayashi-etx2006]: as a page

Services

Program Committee Membership

Steering/Organizing Committee Membership

Awards

  1. Challenging Research Award (FY2017) from Tokyo Institute of Technology.
  2. The 15th International Conference on Intelligent Software Methodologies, Tools and Techniques Best Paper Award
  3. Research Encouragement Award from the IEICE Special Interest Group on Software Science, Jul. 2016.
  4. Research Encouragement Award from the IEICE Special Interest Group on Software Science, Jul. 2016.
  5. Contribution Award at FOSE 2013, Nov. 30, 2013.
  6. Yamashita SIG Research Award from IPSJ, Mar. 7, 2012.
  7. IEEE Computer Society Japan Chapter FOSE Young Researcher Award at FOSE 2011, Nov. 26, 2011.
  8. Best Paper Award from SES 2010, Aug. 31, 2010.
  9. Seiichi Tejima Doctoral Dissertation Award from Tokyo Institute of Technology, Feb. 24, 2010.
  10. Clark Awards 2003 from Hokkaido University, Mar. 24, 2004.
  11. The Best Score Award from Programming Contest 2003, IPSJ Hokkaido Branch, Mar. 22, 2003.
--
Shinpei Hayashi [[ e-mail address ]] [PGP pubkey(C5F14DA2)]