HAYASHI, Shinpei at se.cs.titech.ac.jp

[Japanese]

About Me

Shinpei Hayashi is an assistant professor in the Department of Computer Science, School of Computing, Tokyo Institute of Technology. He received a B.Eng. degree from Hokkaido University in 2004. He also received M.Eng. and Dr.Eng. degrees from Tokyo Institute of Technology in 2006 and 2008, respectively.

Was

See Also

Contact Addresses

Location
West-8E Bldg. #901, Ookayama Campus, Tokyo Institute of Technology
Address
Ookayama 2-12-1-W8-83, Ookayama, Meguro-ku, Tokyo 152-8552, Japan
Phone/Fax.
+81-3-5734-3920 or skype:hayashi.shinpei

Current Interests

Software Engineering

P{ublic,resent}ations

To Be Published

  1. Junzo Kato, Motoshi Saeki, Atsushi Ohnishi, Haruhiko Kaiya, Shinpei Hayashi, Shuichiro Yamamoto: "Supporting Construction of a Thesaurus for Requirements Elicitation" (in Japanese). IPSJ Journal, vol. 57, no. 7. jul, 2016.
  2. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki: "Early Requirements Analysis for a Socio-Technical System based on Goal Dependencies". In Proceedings of the 15th International Conference on Intelligent Software Methodologies, Tools and Techniques (SOMET 2016). Larnaca, Cyprus, sep, 2016.

Recent Publications (2016)

  1. Natthawute Sae-Lim, Shinpei Hayashi, Motoshi Saeki: "Context-Based Code Smells Prioritization for Prefactoring". In Proceedings of the 24th International Conference on Program Comprehension (ICPC 2016). Austin, Texas, USA, may, 2016.
    Abstract
    To find opportunities for applying prefactoring, several techniques for detecting bad smells in source code have been proposed. Existing smell detectors are often unsuitable for developers who work in a specific context because these detectors do not consider that context and output results that mix smells related to it with smells that are not. Consequently, developers must spend a considerable amount of time identifying the relevant smells. In this paper, we propose a technique to prioritize bad code smells using the developers' context. Explicit data on the context are obtained from a list of issues extracted from an issue tracking system. We apply impact analysis to the list of issues and use the results to specify which smells are associated with the context. Our approach can thus provide developers with a list of prioritized bad code smells related to their current context. Several evaluations using open source projects demonstrate the effectiveness of our technique.
    BibTeX
    @inproceedings{natthawute-icpc2016,
        author = {Natthawute Sae-Lim and Shinpei Hayashi and Motoshi Saeki},
        title = {Context-Based Code Smells Prioritization for Prefactoring},
        booktitle = {Proceedings of the 24th International Conference on Program Comprehension},
        year = 2016,
        month = {may},
    }
    [natthawute-icpc2016]: as a page
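    Sketch
    A minimal, hedged sketch of the prioritization idea summarized in the abstract above: smells whose modules overlap more with the impact set estimated from the current issues are ranked higher. The data, module names, and scoring formula below are invented for illustration; they are not the paper's actual metrics.
    # Illustrative only: rank detected smells by their overlap with the modules
    # estimated as impacted by the developer's current issues.
    def prioritize_smells(smells, impact_set):
        """smells: smell id -> set of module names it involves.
        impact_set: module names related to the current issues."""
        def score(modules):
            return len(modules & impact_set) / len(modules) if modules else 0.0
        return sorted(smells, key=lambda s: score(smells[s]), reverse=True)

    smells = {
        "GodClass:OrderManager":   {"OrderManager"},
        "FeatureEnvy:Invoice.pay": {"Invoice", "Payment"},
        "LongMethod:Report.build": {"Report"},
    }
    impact_set = {"Invoice", "Payment", "OrderManager"}  # e.g., from issue analysis
    print(prioritize_smells(smells, impact_set))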
  2. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "Slicing Fine-Grained Code Change History". IEICE Transactions on Information and Systems, vol. E99-D, no. 3, pp. 671-687. mar, 2016.
    ID
    DOI: 10.1587/transinf.2015EDP7282
    Abstract
    Change-aware development environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, since they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed, they often eliminate several code changes of no interest by manually skipping them in replaying. This skipping action is an obstacle that makes many programmers hesitate when they use existing replaying tools. This paper proposes a slicing mechanism that automatically removes manually skipped code changes from the whole history of past code changes and extracts only those necessary to build a particular class member of a Java program. In this mechanism, fine-grained code changes are represented by edit operations recorded on the source code of a program and dependencies among edit operations are formalized. The paper also presents a running tool that slices the operation history and replays its resulting slices. With this tool, programmers can avoid replaying nonessential edit operations for the construction of class members that they want to understand. Experimental results show that the tool offered improvements over conventional replaying tools with respect to the reduction of the number of edit operations needed to be examined and over history filtering tools with respect to the accuracy of edit operations to be replayed.
    BibTeX
    @article{maruyama-ieicet201603,
        author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
        title = {Slicing Fine-Grained Code Change History},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E99-D},
        number = 3,
        pages = {671--687},
        year = 2016,
        month = {mar},
    }
    [maruyama-ieicet201603]: as a page
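    Sketch
    A rough, hedged sketch of the slicing idea in the abstract above: keep only the edit operations that a target operation transitively depends on, in their original order. The history encoding and the dependencies are simplified assumptions, not the formalization used in the paper.
    # Illustrative only: slice an edit-operation history down to the operations
    # needed (directly or transitively) for a target operation.
    def slice_history(history, depends_on, targets):
        """history: operation ids in chronological order.
        depends_on: op -> set of earlier ops it depends on.
        targets: ops needed to build the class member of interest."""
        needed, worklist = set(targets), list(targets)
        while worklist:
            op = worklist.pop()
            for dep in depends_on.get(op, ()):
                if dep not in needed:
                    needed.add(dep)
                    worklist.append(dep)
        return [op for op in history if op in needed]

    history = ["e1", "e2", "e3", "e4", "e5"]
    depends_on = {"e5": {"e3"}, "e3": {"e1"}, "e4": {"e2"}}
    print(slice_history(history, depends_on, {"e5"}))  # ['e1', 'e3', 'e5']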

Papers Published in Academic Journals

  1. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Combining Dynamic Feature Location with Call Graph Separation" (in Japanese). IEICE Transactions on Information and Systems, vol. J98-D, no. 11, pp. 1374-1376. nov, 2015.
    ID
    DOI: 10.14923/transinfj.2015SSL0001
    Abstract
    We combine a dynamic feature location technique based on formal concept analysis with a call graph separation technique, and examine, based on the results of applying them to an example, how to obtain a set of modules corresponding to a feature with good accuracy even when the prepared scenarios are insufficient.
    BibTeX
    @article{kato-ieicet-ss2015,
        author = {加藤 哲平 and 林 晋平 and 佐伯 元司},
        title = {Combining Dynamic Feature Location with Call Graph Separation},
        journal = {IEICE Transactions on Information and Systems},
        volume = {J98-D},
        number = 11,
        pages = {1374--1376},
        year = 2015,
        month = {nov},
    }
    [kato-ieicet-ss2015]: as a page
  2. Eunjong Choi, Kenji Fujiwara, Norihiro Yoshida, Shinpei Hayashi: "A Survey of Refactoring Detection Techniques Based on Change History Analysis" (in Japanese). Computer Software, vol. 32, no. 1, pp. 47-59. feb, 2015.
    ID
    DOI: 10.11309/jssst.32.1_47
    Abstract
    Refactoring is the process of changing a software system in such a way that it does not alter the external behavior of the code yet improves its internal structure. Not only researchers but also practitioners need to know past instances of refactoring performed in a software development project. So far, a number of techniques have been proposed for the automatic detection of refactoring instances. These techniques have been presented in various international conferences and journals, and it is difficult for researchers and practitioners to grasp the current status of studies on refactoring detection techniques. In this survey paper, we introduce refactoring detection techniques, especially those based on change history analysis. First, we give the definition and categorization of refactoring detection used in this paper, and then introduce refactoring detection techniques based on change history analysis. Finally, we discuss possible future research directions on refactoring detection.
    BibTeX
    @article{choi-jssst-survey2015,
        author = {Eunjong Choi and Kenji Fujiwara and Norihiro Yoshida and Shinpei Hayashi},
        title = {A Survey of Refactoring Detection Techniques Based on Change History Analysis},
        journal = {Computer Software},
        volume = 32,
        number = 1,
        pages = {47--59},
        year = 2015,
        month = {feb},
    }
    [choi-jssst-survey2015]: as a page
  3. Takayuki Omori, Shinpei Hayashi, Katsuhisa Maruyama: "A survey on methods of recording fine-grained operations on integrated development environments and their applications" (in Japanese). Computer Software, vol. 32, no. 1, pp. 60-80. feb, 2015.
    ID
    DOI: 10.11309/jssst.32.1_60
    Abstract
    This paper presents a survey of techniques to record and utilize developers' operations in integrated development environments (IDEs). In particular, we target techniques that treat fine-grained code changes, as a reference for software evolution research. We created a three-tiered model to represent the relationships among IDEs, recording techniques, and application techniques. This paper also presents common features of the techniques and their details.
    BibTeX
    @article{omori-jssst-survey2015,
        author = {Takayuki Omori and Shinpei Hayashi and Katsuhisa Maruyama},
        title = {A survey on methods of recording fine-grained operations on integrated development environments and their applications},
        journal = {Computer Software},
        volume = 32,
        number = 1,
        pages = {60--80},
        year = 2015,
        month = {feb},
    }
    [omori-jssst-survey2015]: as a page
  4. Daiki Hoshino, Shinpei Hayashi, Motoshi Saeki: "Automated Grouping of Editing Operations of Source Code" (in Japanese). Computer Software, vol. 31, no. 3, pp. 277-283. aug, 2014.
    ID
    DOI: 10.11309/jssst.31.3_277
    Abstract
    In software configuration management, it is important to separate source code changes into meaningful units before committing them (in short, Task Level Commit). However, developers often commit unrelated code changes in a single transaction. To support Task Level Commit, an existing technique uses an editing history of source code and enables developers to group the editing operations in the history. This paper proposes an automated technique for grouping editing operations in a history based on several criteria, including source files, classes, methods, comments, and edit times. We show how our technique reduces developers' separation cost compared with the manual approach.
    BibTeX
    @article{dhoshino-jssst201408,
        author = {Daiki Hoshino and Shinpei Hayashi and Motoshi Saeki},
        title = {Automated Grouping of Editing Operations of Source Code},
        journal = {Computer Software},
        volume = 31,
        number = 3,
        pages = {277--283},
        year = 2014,
        month = {aug},
    }
    [dhoshino-jssst201408]: as a page
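    Sketch
    A minimal sketch of one of the grouping criteria mentioned in the abstract above (edited file and time gap); the threshold, data layout, and file names are assumptions for illustration only.
    # Illustrative only: group chronological edit operations, starting a new
    # group when the edited file changes or the time gap exceeds a threshold.
    def group_edits(edits, max_gap=300):
        """edits: list of (timestamp_in_seconds, file_path), sorted by time."""
        groups = []
        for edit in edits:
            if groups:
                last_time, last_file = groups[-1][-1]
                if edit[1] == last_file and edit[0] - last_time <= max_gap:
                    groups[-1].append(edit)
                    continue
            groups.append([edit])
        return groups

    edits = [(0, "A.java"), (40, "A.java"), (1000, "B.java"), (1030, "B.java")]
    print(group_edits(edits))  # two groups: the A.java edits and the B.java edits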
  5. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "Quality Properties of Goals in an Attributed Goal Graph" (in Japanese). IPSJ Journal, vol. 55, no. 2, pp. 893-908. feb, 2014.
    URL
    http://id.nii.ac.jp/1001/00098488/
    Abstract
    Goal-oriented requirements analysis (GORA) is a promising technique in requirements engineering, especially requirements elicitation. This paper aims at developing a technique to support the improvement of goal graphs, which are the resulting artifacts of GORA. We consider that improving existing goals of lower quality is more realistic than creating a goal graph of high quality from scratch. To realize the proposed technique, we formally define quality properties for each goal. Our quality properties result from IEEE Std 830 and past related studies. To define them formally, using attribute values of an attributed goal graph, we formulate predicates for deciding whether or not a goal satisfies a quality property. We have implemented a supporting tool that shows a requirements analyst the goals which do not satisfy the predicates. Our experiments using the tool show that requirements analysts can efficiently find and modify the qualitatively problematic goals.
    BibTeX
    @article{ugai-ipsjj201402,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {Quality Properties of Goals in an Attributed Goal Graph},
        journal = {IPSJ Journal},
        volume = 55,
        number = 2,
        pages = {893--908},
        year = 2014,
        month = {feb},
    }
    [ugai-ipsjj201402]: as a page
  6. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge". International Journal of Software Engineering and Knowledge Engineering, vol. 23, no. 5, pp. 695-720. jun, 2013.
    ID
    DOI: 10.1142/S0218194013500174
    Abstract
    Goal-oriented requirements analysis (GORA) is one of the promising techniques to elicit software requirements, and it is natural to consider its application to security requirements analysis. In this paper, we propose a method for goal-oriented security requirements analysis using security knowledge derived from several security targets (STs) compliant with the Common Criteria (CC, ISO/IEC 15408). We call such knowledge a security ontology for an application domain (SOAD). Three aspects of security, namely confidentiality, integrity, and availability, are included in the scope of our method because the CC addresses these three aspects. We extract security-related concepts such as assets, threats, countermeasures, and their relationships from STs, and utilize these concepts and relationships for security goal elicitation and refinement in GORA. The use of certified STs as a knowledge source allows us to efficiently reuse security-related concepts of higher quality. To realize our proposed method as a supporting tool, we use an existing method, GOORE (goal-oriented and ontology-driven requirements elicitation), combined with SOAD. In GOORE, terms and their relationships in a domain ontology play an important role in semantic processing such as goal refinement and conflict identification. SOAD is defined based on concepts in STs. In contrast with other goal-oriented security requirements methods, the knowledge derived from actual STs contributes to eliciting security requirements in our method. In addition, the relationships among assets, threats, objectives, and security functional requirements can be directly reused for the refinement of security goals. We show an illustrative example to demonstrate the usefulness of our method and evaluate it in comparison with other goal-oriented security requirements analysis methods.
    BibTeX
    @article{saeki-ijseke201306,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {Enhancing Goal-Oriented Security Requirements Analysis Using Common Criteria-Based Knowledge},
        journal = {International Journal of Software Engineering and Knowledge Engineering},
        volume = 23,
        number = 5,
        pages = {695--720},
        year = 2013,
        month = {jun},
    }
    [saeki-ijseke201306]: as a page
  7. Takayuki Omori, Katsuhisa Maruyama, Shinpei Hayashi, Atsushi Sawada: "A Literature Review on Software Evolution Research" (in Japanese). Computer Software, vol. 29, no. 3, pp. 3-28. aug, 2012.
    ID
    DOI: 10.11309/jssst.29.3_3
    Abstract
    Software must be continually evolved to keep up with users’ needs. In this article, we propose a new taxonomy of software evolution. It consists of three perspectives: methods, targets, and objectives of evolution. We also present a literature review on software evolution based on our taxonomy. The result could provide a concrete baseline in discussing research trends and directions in the field of software evolution.
    BibTeX
    @article{omori-jssst-fose2012,
        author = {Takayuki Omori and Katsuhisa Maruyama and Shinpei Hayashi and Atsushi Sawada},
        title = {A Literature Review on Software Evolution Research},
        journal = {Computer Software},
        volume = 29,
        number = 3,
        pages = {3--28},
        year = 2012,
        month = {aug},
    }
    [omori-jssst-fose2012]: as a page
  8. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis" (in Japanese). IPSJ Journal, vol. 53, no. 4, pp. 1448-1460. apr, 2012.
    URL
    http://id.nii.ac.jp/1001/00081787/
    Abstract
    Software requirements elicitation is a cooperative activity among stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as an imbalance or a lack of stakeholders. This paper presents a technique and a tool which visualize the strength of stakeholders' interest in concerns on a two-dimensional screen. The tool generates anchored maps from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods, in which stakeholders' interest in concerns and its degree are attributes of goals. Additionally, an experimental evaluation is described, whose results show that users of the tool could identify an imbalance or a lack of stakeholders more accurately and in a shorter time than with a table of stakeholders and requirements.
    BibTeX
    @article{ugai-ipsjj201204,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {A Supporting Tool to Identify Stakeholders' Imbalance and Lack in Requirements Analysis},
        journal = {IPSJ Journal},
        volume = 53,
        number = 4,
        pages = {1448--1460},
        year = 2012,
        month = {apr},
    }
    [ugai-ipsjj201204]: as a page
  9. Shinpei Hayashi, Daisuke Tanabe, Haruhiko Kaiya, Motoshi Saeki: "Impact Analysis on an Attributed Goal Graph". IEICE Transactions on Information and Systems, vol. E95-D, no. 4, pp. 1012-1020. apr, 2012.
    ID
    DOI: 10.1587/transinf.E95.D.1012
    URL
    http://search.ieice.org/bin/summary.php?id=e95-d_4_1012&category=D&year=2012&lang=E
    Abstract
    Requirements changes frequently occur at any time of a software development process, and their management is a crucial issue to develop software of high quality. Meanwhile, goal-oriented analysis techniques are being put into practice to elicit requirements. In this situation, the change management of goal graphs and its support are necessary. This paper presents a technique related to the change management of goal graphs, realizing impact analysis on a goal graph when its modifications occur. Our impact analysis detects conflicts that arise when a new goal is added, and investigates the achievability of the other goals when an existing goal is deleted. We have implemented a supporting tool for automating the analysis. Two case studies suggested the efficiency of the proposed approach.
    BibTeX
    @article{hayashi-ieicet-kbse2012,
        author = {Shinpei Hayashi and Daisuke Tanabe and Haruhiko Kaiya and Motoshi Saeki},
        title = {Impact Analysis on an Attributed Goal Graph},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E95-D},
        number = 4,
        pages = {1012--1020},
        year = 2012,
        month = {apr},
    }
    [hayashi-ieicet-kbse2012]: as a page
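    Sketch
    A hedged reading of the deletion side of the impact analysis described above: after a goal is removed, an AND-decomposed parent needs all of its children, while an OR-decomposed parent needs at least one achievable child. The graph encoding and goal names are illustrative assumptions; conflict detection on goal addition is not sketched.
    # Illustrative only: check achievability of goals in an AND/OR goal graph
    # after some goals have been deleted.  Leaf goals are achievable if kept.
    def achievable(goal, children, kind, deleted):
        """children: goal -> list of subgoals; kind: goal -> 'AND' or 'OR'."""
        if goal in deleted:
            return False
        subgoals = children.get(goal, [])
        if not subgoals:
            return True
        alive = [g for g in subgoals if g not in deleted]
        if kind.get(goal) == "AND":
            return len(alive) == len(subgoals) and all(
                achievable(g, children, kind, deleted) for g in alive)
        return any(achievable(g, children, kind, deleted) for g in alive)

    children = {"Root": ["G1", "G2"], "G1": ["G3", "G4"]}
    kind = {"Root": "AND", "G1": "OR"}
    print(achievable("Root", children, kind, deleted={"G3"}))  # True: G4 still covers G1
    print(achievable("Root", children, kind, deleted={"G2"}))  # False: the AND loses G2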
  10. Shinpei Hayashi, Katsuyuki Sekine, Motoshi Saeki: "Interactive Support for Understanding Feature Implementation with Feature Location" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 578-589. feb, 2012.
    URL
    http://id.nii.ac.jp/1001/00080669/
    Abstract
    This paper proposes an interactive approach for efficiently understanding a feature implementation by applying feature location (FL). Although existing FL techniques can reduce the understanding cost, it is still an open issue to construct appropriate inputs for the techniques. In our approach, the inputs of FL are incrementally improved by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, relevance feedback, obtained by partially judging whether or not a code fragment is required for the understanding, improves the evaluation score of FL. Users can then obtain more accurate results. We have implemented a supporting tool for our approach. Evaluation results using the tool show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach.
    BibTeX
    @article{hayashi-ipsjj-se2012,
        author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki},
        title = {Interactive Support for Understanding Feature Implementation with Feature Location},
        journal = {IPSJ Journal},
        volume = 53,
        number = 2,
        pages = {578--589},
        year = 2012,
        month = {feb},
    }
    [hayashi-ipsjj-se2012]: as a page
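    Sketch
    A toy, hedged sketch of the interactive loop described above: methods are scored by the overlap between query terms and their identifiers, and marking a result as relevant feeds its identifiers back into the query. The scoring and feedback rule are simplifications, not the paper's formulas; all names and data are invented.
    # Illustrative only: rank methods against a query of identifier terms and
    # refine the query with relevance feedback from the user.
    def score(query, identifiers):
        return len(query & identifiers) / len(identifiers)

    def rank(query, methods):
        return sorted(methods, key=lambda m: score(query, methods[m]), reverse=True)

    methods = {
        "Cart.addItem": {"cart", "item", "add", "price"},
        "Cart.total":   {"cart", "total", "price", "tax"},
        "User.login":   {"user", "login", "password"},
    }
    query = {"price", "tax"}
    print(rank(query, methods))        # 'Cart.total' is ranked first

    # Relevance feedback: the user marks Cart.total as relevant, so its
    # identifiers are merged into the query before re-ranking.
    query |= methods["Cart.total"]
    print(rank(query, methods))        # Cart.addItem's score rises as well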
  11. Haruhiko Kaiya, Yuutarou Shimizu, Hirotaka Yasui, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Enhancing Domain Knowledge for Requirements Elicitation with Web Mining" (in Japanese). IPSJ Journal, vol. 53, no. 2, pp. 495-509. feb, 2012.
    URL
    http://id.nii.ac.jp/1001/00080661/
    Abstract
    Software engineers require knowledge about a problem domain when they elicit requirements for a system in that domain. Explicit descriptions of such knowledge, such as a domain ontology, contribute to eliciting the requirements correctly and completely. Methods for eliciting requirements using an ontology have thus been proposed, and such an ontology is normally developed based on documents and/or experts in the problem domain. However, it is not easy for engineers to elicit requirements correctly and completely with such a domain ontology alone because they are not normally experts in the problem domain. In this paper, we propose a method and a tool to enhance a domain ontology using Web mining. Our method and tool help engineers add knowledge that makes the domain ontology easier for them to understand. In our method, candidates for such additional knowledge are gathered from Web pages using keywords in the existing domain ontology. The candidates are then prioritized based on the degree of the relationship between each candidate and the existing ontology, and on the frequency and distribution of the candidate over Web pages. Engineers finally add new knowledge to the existing ontology from these prioritized candidates. We also show an experiment and its results confirming that the enhanced ontology enables engineers to elicit requirements more completely and correctly than the existing ontology does.
    BibTeX
    @article{kaiya-ipsjj-se2012,
        author = {Haruhiko Kaiya and Yuutarou Shimizu and Hirotaka Yasui and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Enhancing Domain Knowledge for Requirements Elicitation with Web Mining},
        journal = {IPSJ Journal},
        volume = 53,
        number = 2,
        pages = {495--509},
        year = 2012,
        month = {feb},
    }
    [kaiya-ipsjj-se2012]: as a page
  12. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Using Hierarchical Transformation to Generate Assertion Code from OCL Constraints". IEICE Transactions on Information and Systems, vol. E94-D, no. 3, pp. 612-621. mar, 2011.
    ID
    DOI: 10.1587/transinf.E94.D.612
    URL
    http://search.ieice.org/bin/summary.php?id=e94-d_3_612&category=D&year=2011&lang=E
    Abstract
    Object Constraint Language (OCL) is frequently applied in software development for stipulating formal constraints on software models. Its platform-independent characteristic allows for wide usage during the design phase. However, application in platform-specific processes, such as coding, is less obvious because it requires the use of bespoke tools for that platform. In this paper we propose an approach to generate assertion code for OCL constraints for multiple platform-specific languages, using a unified framework based on structural similarities of programming languages. We have succeeded in automating the process of assertion code generation for four different languages using our tool. To show the effectiveness of our approach in terms of development effort, an experiment was carried out and is summarised.
    BibTeX
    @article{rodion-ieicet201103,
        author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
        title = {Using Hierarchical Transformation to Generate  Assertion Code from OCL Constraints},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E94-D},
        number = 3,
        pages = {612--621},
        year = 2011,
        month = {mar},
    }
    [rodion-ieicet201103]: as a page
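    Sketch
    A deliberately tiny, hedged illustration of the idea of generating platform-specific assertions from a single constraint. The templates, the constraint, and the target languages are invented, and translating the OCL expression itself per language (which a real generator must do) is omitted; the paper's hierarchical transformation framework is far more general.
    # Illustrative only: render one OCL-style invariant as assertion code for
    # several target languages via per-language templates.
    TEMPLATES = {
        "java":   'assert {expr} : "invariant {name} violated";',
        "python": 'assert {expr}, "invariant {name} violated"',
        "c":      'assert({expr});  /* invariant {name} */',
    }

    def generate_assertion(name, expr, language):
        return TEMPLATES[language].format(name=name, expr=expr)

    # OCL: context Account inv NonNegative: self.balance >= 0
    for lang in TEMPLATES:
        print(generate_assertion("NonNegative", "balance >= 0", lang))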
  13. Hiroshi Kazato, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Choosing Software Implementation Technologies Using Bayesian Networks" (in Japanese). IPSJ Journal, vol. 51, no. 9, pp. 1765-1776. sep, 2010.
    Abstract
    It is difficult to estimate how a combination of implementation technologies influences the quality attributes of an entire system. In this paper, we propose a technique to choose implementation technologies by modeling causal dependencies between requirements and technologies probabilistically using Bayesian networks. We have implemented our technique on a Bayesian network tool and applied it to a case study of a business application to show its effectiveness.
    BibTeX
    @article{kazato-ipsjj-se2010,
        author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Choosing Software Implementation Technologies Using Bayesian Networks},
        journal = {IPSJ Journal},
        volume = 51,
        number = 9,
        pages = {1765--1776},
        year = 2010,
        month = {sep},
    }
    [kazato-ipsjj-se2010]: as a page
  14. Takashi Kobayashi, Shinpei Hayashi: "Recent Researches for Supporting Software Construction and Maintenance with Data Mining" (in Japanese). Computer Software, vol. 27, no. 3, pp. 13-23. aug, 2010.
    ID
    DOI: 10.11309/jssst.27.3_13
    Abstract
    This paper discusses recent studies on technologies for supporting software construction and maintenance by analyzing various software engineering data. We also introduce typical data mining techniques for analyzing the data.
    BibTeX
    @article{tkobaya-jssst-fose2010,
        author = {Takashi Kobayashi and Shinpei Hayashi},
        title = {Recent Researches for Supporting Software Construction and Maintenance with Data Mining},
        journal = {Computer Software},
        volume = 27,
        number = 3,
        pages = {13--23},
        year = 2010,
        month = {aug},
    }
    [tkobaya-jssst-fose2010]: as a page
  15. Shinpei Hayashi, Yusuke Sasaki, Motoshi Saeki: "Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process" (in Japanese). Computer Software, vol. 27, no. 2, pp. 118-123. may, 2010.
    ID
    DOI: 10.11309/jssst.27.2_118
    Abstract
    This paper proposes a technique for selecting the most appropriate alternative of source code changes based on the commitment of each developer to the software development project. In the technique, we evaluate the alternative changes using an evaluation function that integrates multiple software metrics to suppress the influence of each developer's subjectivity. By regarding the selection of alternative changes as multiple-criteria decision making, we create the function with the Analytic Hierarchy Process. A preliminary evaluation shows the efficiency of the technique.
    BibTeX
    @article{hayashi-jssst-fose2010,
        author = {Shinpei Hayashi and Yusuke Sasaki and Motoshi Saeki},
        title = {Evaluating Alternatives of Source Code Changes with Analytic Hierarchy Process},
        journal = {Computer Software},
        volume = 27,
        number = 2,
        pages = {118--123},
        year = 2010,
        month = {may},
    }
    [hayashi-jssst-fose2010]: as a page
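    Sketch
    A minimal sketch of how AHP can turn pairwise judgements into weights and score alternatives, assuming the common geometric-mean approximation; the criteria, the comparison matrix, and the metric values are made-up numbers, not data or formulas from the paper.
    # Illustrative only: derive criterion weights from an AHP pairwise comparison
    # matrix and score change alternatives by their weighted metric values.
    from math import prod

    def ahp_weights(matrix):
        """matrix[i][j]: relative importance of criterion i over criterion j."""
        n = len(matrix)
        geo = [prod(row) ** (1.0 / n) for row in matrix]
        total = sum(geo)
        return [g / total for g in geo]

    # Criteria: coupling, cohesion, size (judgements on the usual 1-9 scale).
    matrix = [[1, 3, 5],
              [1 / 3, 1, 3],
              [1 / 5, 1 / 3, 1]]
    weights = ahp_weights(matrix)

    # Normalized metric values of two alternative changes.
    alternatives = {"change-A": [0.8, 0.6, 0.9], "change-B": [0.5, 0.9, 0.7]}
    scores = {name: sum(w * v for w, v in zip(weights, values))
              for name, values in alternatives.items()}
    print(weights, max(scores, key=scores.get))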
  16. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Search-Based Refactoring Detection from Source Code Revisions". IEICE Transactions on Information and Systems, vol. E93-D, no. 4, pp. 754-762. apr, 2010.
    ID
    DOI: 10.1587/transinf.E93.D.754
    URL
    http://search.ieice.org/bin/summary.php?id=e93-d_4_754
    Abstract
    This paper proposes a technique for detecting the occurrences of refactoring from source code revisions. In a real software development process, a refactoring operation may sometimes be performed together with other modifications at the same revision. This means that detecting refactorings from the differences between two versions stored in a software version archive is not usually an easy process. In order to detect these impure refactorings, we model the detection within a graph search. Our technique considers a version of a program as a state and a refactoring as a transition between two states. It then searches for the path that approaches from the initial to the final state. To improve the efficiency of the search, we use the source code differences between the current and the final state for choosing the candidates of refactoring to be applied next and estimating the heuristic distance to the final state. Through case studies, we show that our approach is feasible to detect combinations of refactorings.
    BibTeX
    @article{hayashi-ieicet-kbse2010,
        author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
        title = {Search-Based Refactoring Detection from Source Code Revisions},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E93-D},
        number = 4,
        pages = {754--762},
        year = 2010,
        month = {apr},
    }
    [hayashi-ieicet-kbse2010]: as a page
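    Sketch
    A hedged sketch of the search formulation described above: program states are nodes, candidate refactorings are transitions, and the search is guided by the size of the remaining difference to the final state. The state abstraction and the one-refactoring catalogue below are toy assumptions, not the paper's model.
    # Illustrative only: greedy best-first search from the initial program state
    # to the final one, where each transition is a candidate refactoring.
    import heapq
    from itertools import count

    def moves(state):
        """Yield (refactoring label, next state) pairs from a toy catalogue
        that only knows Move Method between classes A and B."""
        for (kind, member, owner) in state:
            if kind == "method":
                for target in {"A", "B"} - {owner}:
                    nxt = frozenset(state - {(kind, member, owner)}
                                    | {(kind, member, target)})
                    yield f"MoveMethod {member}: {owner} -> {target}", nxt

    def detect(initial, final):
        tie = count()
        frontier = [(len(initial ^ final), next(tie), initial, [])]
        seen = set()
        while frontier:
            _, _, state, path = heapq.heappop(frontier)
            if state == final:
                return path
            if state in seen:
                continue
            seen.add(state)
            for label, nxt in moves(state):
                heapq.heappush(frontier,
                               (len(nxt ^ final), next(tie), nxt, path + [label]))
        return None

    initial = frozenset({("method", "pay", "A"), ("method", "log", "A")})
    final = frozenset({("method", "pay", "B"), ("method", "log", "A")})
    print(detect(initial, final))  # ['MoveMethod pay: A -> B']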
  17. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "ATTED-II provides coexpressed gene networks for Arabidopsis". Nucleic Acids Research, vol. 37, DB issue, pp. 987-991. jan, 2009.
    ID
    DOI: 10.1093/nar/gkn807
    URL
    http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=18953027
    Abstract
    ATTED-II (http://atted.jp) is a database of gene coexpression in Arabidopsis that can be used to design a wide variety of experiments, including the prioritization of genes for functional identification or for studies of regulatory relationships. Here, we report updates of ATTED-II that focus especially on functionalities for constructing gene networks with regard to the following points: (i) introducing a new measure of gene coexpression to retrieve functionally related genes more accurately, (ii) implementing clickable maps for all gene networks for step-by-step navigation, (iii) applying Google Maps API to create a single map for a large network, (iv) including information about protein-protein interactions, (v) identifying conserved patterns of coexpression and (vi) showing and connecting KEGG pathway information to identify functional modules. With these enhanced functions for gene network representation, ATTED-II can help researchers to clarify the functional and regulatory networks of genes in Arabidopsis.
    BibTeX
    @article{obayashi-nar-db2009,
        author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {{ATTED-II} provides coexpressed gene networks for Arabidopsis},
        journal = {Nucleic Acids Research},
        volume = 37,
        number = {DB issue},
        pages = {987--991},
        year = 2009,
        month = {jan},
    }
    [obayashi-nar-db2009]: as a page
  18. Shinpei Hayashi, Junya Katada, Ryota Sakamoto, Takashi Kobayashi, Motoshi Saeki: "Design Pattern Detection by Using Meta Patterns". IEICE Transactions on Information and Systems, vol. E91-D, no. 4, pp. 933-944. apr, 2008.
    ID
    DOI: 10.1093/ietisy/e91-d.4.933
    URL
    http://search.ieice.org/bin/summary.php?id=e91-d_4_933
    Abstract
    One of the approaches to improving program understanding is to extract what kinds of design patterns are used in existing object-oriented software. This paper proposes a technique for efficiently and accurately detecting occurrences of design patterns included in source code. We use both static and dynamic analyses to achieve detection with high accuracy. Moreover, to reduce computation and maintenance costs, detection conditions are hierarchically specified based on Pree's meta patterns as common structures of design patterns. The use of Prolog to represent the detection conditions enables us to easily add and modify them. Finally, we have implemented an automated tool as an Eclipse plug-in and conducted experiments with Java programs. The experimental results show the effectiveness of our approach.
    BibTeX
    @article{hayashi-ieicet-kbse2008,
        author = {Shinpei Hayashi and Junya Katada and Ryota Sakamoto and Takashi Kobayashi and Motoshi Saeki},
        title = {Design Pattern Detection by Using Meta Patterns},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E91-D},
        number = 4,
        pages = {933--944},
        year = 2008,
        month = {apr},
    }
    [hayashi-ieicet-kbse2008]: as a page
  19. Takeshi Obayashi, Shinpei Hayashi, Masayuki Shibaoka, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "COXPRESdb: a database of coexpressed gene networks in mammals". Nucleic Acids Research, vol. 36, DB issue, pp. 77-82. jan, 2008.
    ID
    DOI: 10.1093/nar/gkm840
    URL
    http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=17932064
    Abstract
    A database of coexpressed gene sets can provide valuable information for a wide variety of experimental designs, such as targeting of genes for functional identification, gene regulation and/or protein-protein interactions. Coexpressed gene databases derived from publicly available GeneChip data are widely used in Arabidopsis research, but platforms that examine coexpression for higher mammals are rather limited. Therefore, we have constructed a new database, COXPRESdb (coexpressed gene database) (http://coxpresdb.hgc.jp), for coexpressed gene lists and networks in human and mouse. Coexpression data could be calculated for 19 777 and 21 036 genes in human and mouse, respectively, by using the GeneChip data in NCBI GEO. COXPRESdb enables analysis of the four types of coexpression networks: (i) highly coexpressed genes for every gene, (ii) genes with the same GO annotation, (iii) genes expressed in the same tissue and (iv) user-defined gene sets. When the networks became too big for the static picture on the web in GO networks or in tissue networks, we used Google Maps API to visualize them interactively. COXPRESdb also provides a view to compare the human and mouse coexpression patterns to estimate the conservation between the two species.
    BibTeX
    @article{obayashi-nar-db2008,
        author = {Takeshi Obayashi and Shinpei Hayashi and Masayuki Shibaoka and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {{COXPRESdb}: a database of coexpressed gene networks in mammals},
        journal = {Nucleic Acids Research},
        volume = 36,
        number = {DB issue},
        pages = {77--82},
        year = 2008,
        month = {jan},
    }
    [obayashi-nar-db2008]: as a page
  20. Takeshi Obayashi, Kengo Kinoshita, Kenta Nakai, Masayuki Shibaoka, Shinpei Hayashi, Motoshi Saeki, Daisuke Shibata, Kazuki Saito, Hiroyuki Ohta: "ATTED-II: a database of co-expressed genes and cis elements for identifying co-regulated gene groups in Arabidopsis". Nucleic Acids Research, vol. 35, DB issue, pp. 863-869. jan, 2007.
    ID
    DOI: 10.1093/nar/gkl783
    URL
    http://www.pubmed.gov/?db=pubmed&cmd=retrieve&list_uids=17130150
    Abstract
    Publicly available database of co-expressed gene sets would be a valuable tool for a wide variety of experimental designs, including targeting of genes for functional identification or for regulatory investigation. Here, we report the construction of an Arabidopsis thaliana trans-factor and cis-element prediction database (ATTED-II) that provides co-regulated gene relationships based on co-expressed genes deduced from microarray data and the predicted cis elements. ATTED-II (http://www.atted.bio.titech.ac.jp) includes the following features: (i) lists and networks of co-expressed genes calculated from 58 publicly available experimental series, which are composed of 1388 GeneChip data in A.thaliana; (ii) prediction of cis-regulatory elements in the 200 bp region upstream of the transcription start site to predict co-regulated genes amongst the co-expressed genes; and (iii) visual representation of expression patterns for individual genes. ATTED-II can thus help researchers to clarify the function and regulation of particular genes and gene networks.
    BibTeX
    @article{obayashi-nar-db2007,
        author = {Takeshi Obayashi and Kengo Kinoshita and Kenta Nakai and Masayuki Shibaoka and Shinpei Hayashi and Motoshi Saeki and Daisuke Shibata and Kazuki Saito and Hiroyuki Ohta},
        title = {{ATTED-II}: a database of co-expressed genes and {\it cis} elements for identifying co-regulated gene groups in {\it Arabidopsis}},
        journal = {Nucleic Acids Research},
        volume = 35,
        number = {DB issue},
        pages = {863--869},
        year = 2007,
        month = {jan},
    }
    [obayashi-nar-db2007]: as a page
  21. Shinpei Hayashi, Motoshi Saeki, Masahito Kurihara: "Supporting Refactoring Activities Using Histories of Program Modification". IEICE Transactions on Information and Systems, vol. E89-D, no. 4, pp. 1403-1412. apr, 2006.
    ID
    DOI: 10.1093/ietisy/e89-d.4.1403
    URL
    http://search.ieice.org/bin/summary.php?id=e89-d_4_1403
    Abstract
    Refactoring is one of the promising techniques for improving program design by means of behavior-preserving program transformation, and it is widely applied in practice. However, it is difficult for engineers to identify how and where to refactor programs, because proper knowledge and skills of a high order are required of them. In this paper, we propose a technique to suggest how and where to refactor a program by using a sequence of its modifications. We consider that the histories of program modifications reflect developers' intentions, and focusing on them allows us to provide suitable refactoring guides. Our technique can be automated by storing the correspondence of modification patterns to suitable refactoring operations. By implementing an automated supporting tool, we show its feasibility. The tool is implemented as a plug-in for the Eclipse IDE. It selects refactoring operations by matching a sequence of program modifications against modification patterns.
    BibTeX
    @article{hayashi-ieicet-kbse2006,
        author = {Shinpei Hayashi and Motoshi Saeki and Masahito Kurihara},
        title = {Supporting Refactoring Activities Using Histories of Program Modification},
        journal = {IEICE Transactions on Information and Systems},
        volume = {E89-D},
        number = 4,
        pages = {1403--1412},
        year = 2006,
        month = {apr},
    }
    [hayashi-ieicet-kbse2006]: as a page
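    Sketch
    A minimal, hedged sketch of the pattern-matching idea in the abstract above: the tail of the modification history is matched against stored modification patterns to suggest refactorings. The pattern catalogue and operation names are invented for illustration; they are not the correspondences used by the tool.
    # Illustrative only: suggest refactorings whose modification pattern matches
    # the most recent modifications in the history.
    PATTERNS = {
        ("copy-method", "edit-method"): "Consider Extract Method to remove duplication",
        ("add-parameter", "add-parameter"): "Consider Introduce Parameter Object",
        ("rename-field", "rename-field", "rename-field"): "Consider Rename Class",
    }

    def suggest(history):
        suggestions = []
        for pattern, refactoring in PATTERNS.items():
            if tuple(history[-len(pattern):]) == pattern:
                suggestions.append(refactoring)
        return suggestions

    history = ["edit-method", "add-parameter", "add-parameter"]
    print(suggest(history))  # ['Consider Introduce Parameter Object']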

Research Talks Presented in International Conferences, Workshops, or Symposia

  1. Haruhiko Kaiya, Shinpei Ogata, Shinpei Hayashi, Motoshi Saeki, Takao Okubo, Nobukazu Yoshioka, Hironori Washizaki, Atsuo Hazeyama: "Finding Potential Threats in Several Security Targets for Eliciting Security Requirements". In Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology (ICCGI 2015), pp. 83-92. St. Julians, Malta, oct, 2015.
    URL
    https://www.thinkmind.org/download.php?articleid=iccgi_2015_4_10_10050
    Abstract
    Threats to existing systems help requirements analysts to elicit security requirements for a new system similar to those systems, because security requirements specify how to protect the system against threats and similar systems require similar means for protection. We propose a method of finding potential threats that can be used for eliciting security requirements for such a system. The method enables analysts to find additional security requirements when they have already elicited one or a few threats. The potential threats are derived from several security targets (STs) in the Common Criteria. An ST contains knowledge related to security requirements, such as threats and objectives, together with their explicit relationships. In addition, individual objectives are explicitly related to a set of means for protection, which are commonly used in any ST. Because we focus on such means to find potential threats, our method can be applied to STs written in any language, such as English or French. We applied our method to three different domains and evaluated it. In our evaluation, we enumerated all threat pairs in each domain. We then predicted whether the two threats in each pair threaten the same requirement according to the method. The recall of the prediction was more than 70% and the precision was 20 to 40% in the three domains.
    BibTeX
    @inproceedings{kaiya-iccgi2015,
        author = {Haruhiko Kaiya and Shinpei Ogata and Shinpei Hayashi and Motoshi Saeki and Takao Okubo and Nobukazu Yoshioka and Hironori Washizaki and Atsuo Hazeyama},
        title = {Finding Potential Threats in Several Security Targets for Eliciting Security Requirements},
        booktitle = {Proceedings of the 10th International Multi-Conference on Computing in the Global Information Technology},
        pages = {83--92},
        year = 2015,
        month = {oct},
    }
    [kaiya-iccgi2015]: as a page
  2. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling and Utilizing Security Knowledge for Eliciting Security Requirements". In Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis (MReBa 2015), co-located with ER 2015, pp. 236-247. Stockholm, Sweden, oct, 2015.
    ID
    DOI: 10.1007/978-3-319-25747-1_24
    Abstract
    In order to develop secure information systems with less development cost, it is important to elicit the requirements for security functions (security requirements for short) as early in the development process as possible. To achieve this, accumulated knowledge of threats and their objectives obtained from practical experience is useful, and a technique to support the elicitation of security requirements utilizing this knowledge should be developed. In this paper, we present a technique for security requirements elicitation that uses practical knowledge of threats, their objectives, and the security functions realizing those objectives, extracted from Security Target documents compliant with the Common Criteria standard. We show the usefulness of our approach with several case studies.
    BibTeX
    @inproceedings{abe-mreba2015,
        author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
        title = {Modeling and Utilizing Security Knowledge for Eliciting Security Requirements},
        booktitle = {Proceedings of the 2nd International Workshop on Conceptual Modeling in Requirements and Business Analysis},
        pages = {236--247},
        year = 2015,
        month = {oct},
    }
    [abe-mreba2015]: as a page
  3. Jumpei Matsuda, Shinpei Hayashi, Motoshi Saeki: "Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results". In Proceedings of the 14th International Workshop on Principles of Software Evolution (IWPSE 2015), co-located with ESEC/FSE 2015, pp. 19-27. Bergamo, Italy, aug, 2015.
    ID
    DOI: 10.1145/2804360.2804363
    Abstract
    In software configuration management using a version control system, developers have to follow the commit policy of the project. However, preparing changes according to the policy is sometimes cumbersome and time-consuming, in particular when applying a large refactoring consisting of multiple primitive refactoring instances. In this paper, we propose a technique for re-organizing changes by recording editing operations of source code. Editing operations, including refactoring operations, are hierarchically managed based on their types provided by an integrated development environment. Using the obtained hierarchy, developers can easily configure the granularity of changes and obtain the resulting changes based on the configured granularity. We confirmed the feasibility of the technique by applying it to the recorded changes in a large refactoring process.
    BibTeX
    @inproceedings{jmatsu-iwpse2015,
        author = {Jumpei Matsuda and Shinpei Hayashi and Motoshi Saeki},
        title = {Hierarchical Categorization of Edit Operations for Separately Committing Large Refactoring Results},
        booktitle = {Proceedings of the 14th International Workshop on Principles of Software Evolution},
        pages = {19--27},
        year = 2015,
        month = {aug},
    }
    [jmatsu-iwpse2015]: as a page
  4. Ryotaro Nakamura, Yu Negishi, Shinpei Hayashi, Motoshi Saeki: "Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking". In Proceedings of the 8th International Workshop on Requirements Engineering and Law (RELAW 2015), co-located with RE'15, pp. 10-18. Ottawa, Canada, aug, 2015.
    ID
    DOI: 10.1109/RELAW.2015.7330206
    Abstract
    To check the consistency between requirements specification documents and regulations using a model checking technique, requirements analysts generate inputs to the model checker, i.e., state transition machines from the documents and logical formulas from the regulatory statements to be verified as properties. During these generation processes, to make the logical formulas semantically correspond to the state transition machine, analysts should perform terminology matching, where they look for words in the requirements document having the same meaning as words in the regulatory statements and unify the semantically equivalent words. In this paper, using a case grammar approach, we propose an automated technique to infer the meaning of words in requirements specification documents by means of co-occurrence constraints on words in case frames, and to generate from regulatory statements the logical formulas in which the words are unified with those of the requirements documents. We show the feasibility of our proposal with two case studies.
    BibTeX
    @inproceedings{nakamura-relaw2015,
        author = {Ryotaro Nakamura and Yu Negishi and Shinpei Hayashi and Motoshi Saeki},
        title = {Terminology Matching of Requirements Specification Documents and Regulations for Consistency Checking},
        booktitle = {Proceedings of the 8th International Workshop on Requirements Engineering and Law},
        pages = {10--18},
        year = 2015,
        month = {aug},
    }
    [nakamura-relaw2015]: as a page
  5. Wataru Inoue, Shinpei Hayashi, Haruhiko Kaiya, Motoshi Saeki: "Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering". In Proceedings of the 10th International Conference on Software Engineering and Applications (ICSOFT-EA 2015), pp. 185-195. Colmar, Alsace, France, jul, 2015.
    ID
    DOI: 10.5220/0005499301850195
    Abstract
    In this paper, we propose a multi-dimensional extension of goal graphs in goal-oriented requirements engineering in order to support understanding of the relations between goals, i.e., goal refinements. Goals specify multiple concerns such as functions, strategies, and non-functional properties, and they are refined into subgoals from mixed views of these concerns. This intermixture of concerns in goals makes it difficult for a requirements analyst to understand and maintain goal graphs. In our approach, a goal graph is placed in a multi-dimensional space, a concern corresponds to a coordinate axis in this space, and goals are refined into subgoals with reference to the coordinates. Thus, the meaning of a goal refinement is explicitly provided by means of the coordinates used for the refinement. By tracing and focusing on the coordinates of goals, requirements analysts can understand goal refinements and modify unsuitable ones. We have developed a supporting tool and conducted an exploratory experiment to evaluate the usefulness of our approach.
    BibTeX
    @inproceedings{inouew-icsoft2015,
        author = {Wataru Inoue and Shinpei Hayashi and Haruhiko Kaiya and Motoshi Saeki},
        title = {Multi-Dimensional Goal Refinement in Goal-Oriented Requirements Engineering},
        booktitle = {Proceedings of the 10th International Conference on Software Engineering and Applications},
        pages = {185--195},
        year = 2015,
        month = {jul},
    }
    [inouew-icsoft2015]: as a page
  6. Yoshiki Higo, Akio Ohtani, Shinpei Hayashi, Hideaki Hata, Shinji Kusumoto: "Toward Reusing Code Changes". In Proceedings of the 12th Working Conference on Mining Software Repositories (MSR 2015), pp. 372-376. Florence, Italy, may, 2015.
    ID
    DOI: 10.1109/MSR.2015.43
    Abstract
    Existing techniques have succeeded in helping developers implement new code. However, they are insufficient for helping them change existing code. Previous studies have proposed techniques to support bug fixes, but other kinds of code changes, such as function enhancements and refactorings, are not supported by them. In this paper, we propose a novel system that helps developers change existing code. Unlike existing techniques, our system can support any kind of code change if similar code changes occurred in the past. Our research is still at a very early stage, and we do not have any implementation or prototype yet. This paper introduces our research purpose, an outline of our system, and how our system differs from existing techniques.
    BibTeX
    @inproceedings{higo-msr2015,
        author = {Yoshiki Higo and Akio Ohtani and Shinpei Hayashi and Hideaki Hata and Shinji Kusumoto},
        title = {Toward Reusing Code Changes},
        booktitle = {Proceedings of the 12th Working Conference on Mining Software Repositories},
        pages = {372--376},
        year = 2015,
        month = {may},
    }
    [higo-msr2015]: as a page
  7. Shinpei Hayashi, Daiki Hoshino, Jumpei Matsuda, Motoshi Saeki, Takayuki Omori, Katsuhisa Maruyama: "Historef: A Tool for Edit History Refactoring". In Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering (SANER 2015), Tool Demo Track, pp. 469-473. Montréal, Canada, mar, 2015.
    ID
    DOI: 10.1109/SANER.2015.7081858
    Abstract
    This paper presents Historef, a tool for automating edit history refactoring on the Eclipse IDE for Java programs. The aim of our history refactorings is to improve the understandability and/or usability of the history without changing its whole effect. Historef enables us to apply history refactorings to the edit history recorded while a developer edits the source code. By using our integrated tool, developers can commit the refactored edits into the underlying SCM repository after applying edit history refactorings, so that they can easily manage their changes based on the performed edits.
    BibTeX
    @inproceedings{hayashi-saner2015,
        author = {Shinpei Hayashi and Daiki Hoshino and Jumpei Matsuda and Motoshi Saeki and Takayuki Omori and Katsuhisa Maruyama},
        title = {Historef: A Tool for Edit History Refactoring},
        booktitle = {Proceedings of the 22nd IEEE International Conference on Software Analysis, Evolution, and Reengineering},
        pages = {469--473},
        year = 2015,
        month = {mar},
    }
    [hayashi-saner2015]: as a page
  8. Shinpei Hayashi, Takashi Ishio, Hiroshi Kazato, Tsuyoshi Oshima: "Toward Understanding How Developers Recognize Features in Source Code from Descriptions". In Proceedings of the 9th International Workshop on Advanced Modularization Techniques (AOAsia/Pacific 2014), co-located with FSE 2014, pp. 1-3. Hong Kong, China, nov, 2014.
    ID
    DOI: 10.1145/2666358.2666578
    Abstract
    A basic clue of feature location available to developers is a description of a feature written in a natural language. However, a description of a feature does not clearly specify the boundary of the feature, while developers tend to locate the feature precisely by excluding marginal modules that are likely outside of the boundary. This paper addresses a question: does a clearer description of a feature enable developers to recognize the same sets of modules as relevant to the feature? Based on the conducted experiment with subjects, we conclude that different descriptions lead to a different set of modules.
    Slide
    BibTeX
    @inproceedings{hayashi-aoasia2014,
        author = {Shinpei Hayashi and Takashi Ishio and Hiroshi Kazato and Tsuyoshi Oshima},
        title = {Toward Understanding How Developers Recognize Features in Source Code from Descriptions},
        booktitle = {Proceedings of the 9th International Workshop on Advanced Modularization Techniques},
        pages = {1--3},
        year = 2014,
        month = {nov},
    }
    [hayashi-aoasia2014]: as a page
  9. Shinpei Hayashi, Takuto Yanagida, Motoshi Saeki, Hidenori Mimura: "Class Responsibility Assignment as Fuzzy Constraint Satisfaction". In Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice (IWESEP 2014), pp. 19-24. Osaka, Japan, nov, 2014.
    ID
    DOI: 10.1109/IWESEP.2014.13
    Abstract
    We formulate the class responsibility assignment (CRA) problem as a fuzzy constraint satisfaction problem (FCSP) to automate CRA of high quality. Responsibilities are contracts or obligations that objects should assume; by assigning them to classes appropriately, high-quality designs are realized. Typical conditions of a desirable design are low coupling between highly cohesive classes. However, because of trade-offs among such conditions, solutions that satisfy the conditions moderately are desired, and computer assistance is needed. Additionally, if we have an initial assignment, the assignment improved by our technique should keep the original one as much as possible because it reflects the intentions of human designers. We represent such conditions as fuzzy constraints and formulate CRA as an FCSP. This enables us to apply common FCSP solvers to the problem and to derive a solution representing a CRA. A preliminary evaluation indicates the effectiveness of our technique.
    Slide
    BibTeX
    @inproceedings{hayashi-iwesep2014,
        author = {Shinpei Hayashi and Takuto Yanagida and Motoshi Saeki and Hidenori Mimura},
        title = {Class Responsibility Assignment as Fuzzy Constraint Satisfaction},
        booktitle = {Proceedings of the 6th International Workshop on Empirical Software Engineering in Practice},
        pages = {19--24},
        year = 2014,
        month = {nov},
    }
    [hayashi-iwesep2014]: as a page
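    Sketch
    A hedged sketch of the formulation described above: assignments of responsibilities to classes are enumerated and the one whose worst-satisfied fuzzy constraint is best (max-min) is kept. The two membership functions and the example data are simplistic stand-ins; the paper uses a proper FCSP solver and richer constraints, including similarity to an initial assignment.
    # Illustrative only: brute-force max-min evaluation of fuzzy constraints over
    # all assignments of responsibilities to classes.
    from itertools import product

    RESPONSIBILITIES = ["validateOrder", "persistOrder", "persistUser"]
    CLASSES = ["OrderService", "Repository"]
    RELATED = [("validateOrder", "persistOrder"), ("persistOrder", "persistUser")]

    def cohesion(assign):
        """Degree (0..1) to which related responsibilities share a class."""
        return sum(assign[a] == assign[b] for a, b in RELATED) / len(RELATED)

    def balance(assign):
        """Degree (0..1) to which classes are evenly loaded."""
        loads = [list(assign.values()).count(c) for c in CLASSES]
        return 1.0 - (max(loads) - min(loads)) / len(RESPONSIBILITIES)

    def best_assignment():
        best, best_degree = None, -1.0
        for combo in product(CLASSES, repeat=len(RESPONSIBILITIES)):
            assign = dict(zip(RESPONSIBILITIES, combo))
            degree = min(cohesion(assign), balance(assign))  # fuzzy AND as min
            if degree > best_degree:
                best, best_degree = assign, degree
        return best, best_degree

    print(best_assignment())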
  10. Katsuhisa Maruyama, Takayuki Omori, Shinpei Hayashi: "A Visualization Tool Recording Historical Data of Program Comprehension Tasks". In Proceedings of the 22nd International Conference on Program Comprehension (ICPC 2014), Tool Demo Track, pp. 207-211. Hyderabad, India, jun, 2014.
    ID
    DOI: 10.1145/2597008.2597802
    Abstract
    Software visualization has become a major technique in program comprehension. Although many tools visualize the structure, behavior, and evolution of a program, they are not concerned with how a tool user has understood it. Moreover, they miss the artifacts the user has left behind through the trial-and-error processes of his/her program comprehension task. This paper presents a source code visualization tool called CodeForest. It uses a forest metaphor to depict the source code of Java programs. Each tree represents a class within the program, and the collection of trees constitutes a three-dimensional forest. CodeForest helps a user try a large number of combinations of mappings of software metrics onto visual parameters. Moreover, it provides two new types of support: leaving notes that record the current understanding and insights alongside visualized objects, and automatically recording a user's actions during understanding. The left notes and recorded actions might be used as historical data providing hints that accelerate the current comprehension task.
    BibTeX
    @inproceedings{maruyama-icpc2014,
        author = {Katsuhisa Maruyama and Takayuki Omori and Shinpei Hayashi},
        title = {A Visualization Tool Recording Historical Data of Program Comprehension Tasks},
        booktitle = {Proceedings of the 22nd International Conference on Program Comprehension},
        pages = {207--211},
        year = 2014,
        month = {jun},
    }
    [maruyama-icpc2014]: as a page
  11. Hiroshi Kazato, Shinpei Hayashi, Tsuyoshi Oshima, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Extracting and Visualizing Implementation Structure of Features". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 476-484. Bangkok, Thailand, dec, 2013.
    ID
    DOI: 10.1109/APSEC.2013.69
    Abstract
    Feature location is an activity to identify correspondence between features in a system and program elements in source code. After a feature is located, developers need to understand the implementation structure around the location from static and/or behavioral points of view. This paper proposes a semi-automatic technique both for locating features and for exposing their implementation structures in source code, using a combination of dynamic analysis and two data analysis techniques: sequential pattern mining and formal concept analysis. We have implemented our technique in a supporting tool and applied it to an example of a web application. The result shows that the proposed technique is not only feasible but also helpful for understanding the implementation of features just after they are located.
    BibTeX
    @inproceedings{kazato-apsec2013,
        author = {Hiroshi Kazato and Shinpei Hayashi and Tsuyoshi Oshima and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Extracting and Visualizing Implementation Structure of Features},
        booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
        pages = {476--484},
        year = 2013,
        month = {dec},
    }
    [kazato-apsec2013]: as a page
  12. Tatsuya Abe, Shinpei Hayashi, Motoshi Saeki: "Modeling Security Threat Patterns to Derive Negative Scenarios". In Proceedings of the 20th Asia-Pacific Software Engineering Conference (APSEC 2013), pp. 58-66. Bangkok, Thailand, dec, 2013.
    ID
    DOI: 10.1109/APSEC.2013.19
    Abstract
    The elicitation of security requirements is a crucial issue in developing secure business processes and information systems of higher quality. Although several methods for eliciting security requirements exist, most of them do not provide sufficient support for identifying security threats. Since threats, like exceptional events, do not occur frequently, it is much more difficult to determine potential threats exhaustively than to identify the normal behavior of a business process. To reduce this difficulty, accumulated knowledge of threats obtained from practical settings is necessary. In this paper, we present a technique to model knowledge of threats as patterns by deriving the negative scenarios that realize the threats, and to utilize these patterns during business process modeling. The knowledge is extracted from Security Target documents based on the international Common Criteria standard, and the patterns are described as transformation rules on sequence diagrams. In our approach, an analyst composes normal scenarios of a business process with sequence diagrams, and the threat patterns matched to them derive negative scenarios. Our approach has been demonstrated on several examples to show its practical applicability.
    BibTeX
    @inproceedings{abe-apsec2013,
        author = {Tatsuya Abe and Shinpei Hayashi and Motoshi Saeki},
        title = {Modeling Security Threat Patterns to Derive Negative Scenarios},
        booktitle = {Proceedings of the 20th Asia-Pacific Software Engineering Conference},
        pages = {58--66},
        year = 2013,
        month = {dec},
    }
    [abe-apsec2013]: as a page
  13. Takashi Ishio, Shinpei Hayashi, Hiroshi Kazato, Tsuyoshi Oshima: "On the Effectiveness of Accuracy of Automated Feature Location Technique". In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), pp. 381-390. Koblenz-Landau, Germany, oct, 2013.
    ID
    DOI: 10.1109/WCRE.2013.6671313
    Abstract
    Automated feature location techniques have been proposed to extract program elements that are likely to be relevant to a given feature. A more accurate result is expected to enable developers to perform more accurate feature location. However, several experiments assessing traceability recovery have shown that analysts cannot utilize an accurate traceability matrix for their tasks. Because feature location deals with a certain type of traceability links, it is an important question whether the same phenomena are visible in feature location or not. To answer that question, we conducted a controlled experiment in which 20 subjects located features using lists of methods whose accuracy was controlled artificially. The result differs from that of the traceability recovery experiments: subjects given an accurate list were able to locate a feature more accurately. However, subjects could not locate the complete implementation of features in 83% of the tasks. The results show that the accuracy of automated feature location techniques is effective but might be insufficient for perfect feature location.
    BibTeX
    @inproceedings{ishio-wcre2013,
        author = {Takashi Ishio and Shinpei Hayashi and Hiroshi Kazato and Tsuyoshi Oshima},
        title = {On the Effectiveness of Accuracy of Automated Feature Location Technique},
        booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
        pages = {381--390},
        year = 2013,
        month = {oct},
    }
    [ishio-wcre2013]: as a page
  14. Shinpei Hayashi, Sirinut Thangthumachit, Motoshi Saeki: "REdiffs: Refactoring-Aware Difference Viewer for Java". In Proceedings of the 20th Working Conference on Reverse Engineering (WCRE 2013), Tool Demonstrations Track, pp. 487-488. Koblenz-Landau, Germany, oct, 2013.
    ID
    DOI: 10.1109/WCRE.2013.6671331
    Abstract
    Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if changes are tangled with refactorings in a single revision, then the resulting source code differences are more complicated. We propose an interactive difference viewer which enables us to separate refactoring effects from source code differences for improving the understandability of the differences.
    BibTeX
    @inproceedings{hayashi-wcre2013,
        author = {Shinpei Hayashi and Sirinut Thangthumachit and Motoshi Saeki},
        title = {REdiffs: Refactoring-Aware Difference Viewer for Java},
        booktitle = {Proceedings of the 20th Working Conference on Reverse Engineering},
        pages = {487--488},
        year = 2013,
        month = {oct},
    }
    [hayashi-wcre2013]: as a page
  15. Hiroshi Kazato, Shinpei Hayashi, Takashi Kobayashi, Tsuyoshi Oshima, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Incremental Feature Location and Identification in Source Code". In Proceedings of the 17th European Conference on Software Maintenance and Reengineering (CSMR 2013), ERA Track, pp. 371-374. Genova, Italy, mar, 2013.
    ID
    DOI: 10.1109/CSMR.2013.52
    Abstract
    Feature location (FL) in source code is an important task for program understanding. Existing dynamic FL techniques depend on sufficient scenarios for exercising the features to be located. However, it is difficult to prepare such scenarios because doing so requires a correct understanding of the features. This paper proposes an incremental technique for refining the identification of features, integrated with an existing FL technique using formal concept analysis. In our technique, we classify the differences between static and dynamic dependencies of method invocations based on their relevance to the identified features. According to the classification, the technique suggests method invocations that exercise unexplored parts of the features. An application example indicates the effectiveness of the approach.
    Slide
    BibTeX
    @inproceedings{kazato-csmr2013,
        author = {Hiroshi Kazato and Shinpei Hayashi and Takashi Kobayashi and Tsuyoshi Oshima and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Incremental Feature Location and Identification in Source Code},
        booktitle = {Proceedings of the 17th European Conference on Software Maintenance and Reengineering},
        pages = {371--374},
        year = 2013,
        month = {mar},
    }
    [kazato-csmr2013]: as a page
  16. Haruhiko Kaiya, Shunsuke Morita, Shinpei Ogata, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Model Transformation Patterns for Introducing Suitable Information Systems". In Proceedings of the 19th Asia-Pacific Software Engineering Conference (APSEC 2012), pp. 434-439. Hong Kong, dec, 2012.
    ID
    DOI: 10.1109/APSEC.2012.52
    Abstract
    When information systems are introduced in a social setting such as a business, the systems will have both good and bad impacts on stakeholders in that setting. Requirements analysts have to predict such impacts in advance because stakeholders cannot decide whether the systems are really suitable for them without such a prediction. In this paper, we propose a method based on model transformation patterns for introducing suitable information systems. We use metrics of a model to predict whether a system introduction is suitable for a social setting. Through a case study, we show that our method can avoid the introduction of a system that was actually bad for some stakeholders. In the case study, we use a strategic dependency model in i* to specify the model of systems and stakeholders, and an attributed graph grammar for model transformation. We focus on the responsibility and satisfaction of stakeholders as the criteria for the suitability of a system introduction in this case study.
    BibTeX
    @inproceedings{kaiya-apsec2012,
        author = {Haruhiko Kaiya and Shunsuke Morita and Shinpei Ogata and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Model Transformation Patterns for Introducing Suitable Information Systems},
        booktitle = {Proceedings of the 19th Asia-Pacific Software Engineering Conference},
        pages = {434--439},
        year = 2012,
        month = {dec},
    }
    [kaiya-apsec2012]: as a page
  17. Teppei Kato, Shinpei Hayashi, Motoshi Saeki: "Cutting a Method Call Graph for Supporting Feature Location". In Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice (IWESEP 2012), pp. 55-57. Osaka, Japan, oct, 2012.
    ID
    DOI: 10.1109/IWESEP.2012.17
    Abstract
    This paper proposes a technique for locating the implementation of features by combining techniques of a graph cut and a formal concept analysis based on methods and scenarios.
    BibTeX
    @inproceedings{kato-iwesep2012,
        author = {Teppei Kato and Shinpei Hayashi and Motoshi Saeki},
        title = {Cutting a Method Call Graph for Supporting Feature Location},
        booktitle = {Proceedings of the 4th International Workshop on Empirical Software Engineering in Practice},
        pages = {55--57},
        year = 2012,
        month = {oct},
    }
    [kato-iwesep2012]: as a page
  18. Shinpei Hayashi, Takayuki Omori, Teruyoshi Zenmyo, Katsuhisa Maruyama, Motoshi Saeki: "Refactoring Edit History of Source Code". In Proceedings of the 28th IEEE International Conference on Software Maintenance (ICSM 2012), ERA Track, pp. 617-620. Riva del Garda, Trento, Italy, sep, 2012.
    ID
    DOI: 10.1109/ICSM.2012.6405336
    Abstract
    This paper proposes a concept for refactoring an edit history of source code and a technique for its automation. The aim of our history refactoring is to improve the clarity and usefulness of the history without changing its overall effect. We have defined primitive history refactorings including their preconditions and procedures, and large refactorings composed of these primitives. Moreover, we have implemented a supporting tool that automates the application of history refactorings in the middle of a source code editing process. Our tool enables developers to pursue some useful applications using history refactorings such as task level commit from an entangled edit history and selective undo of past edit operations.
    Slide
    BibTeX
    @inproceedings{hayashi-icsm2012,
        author = {Shinpei Hayashi and Takayuki Omori and Teruyoshi Zenmyo and Katsuhisa Maruyama and Motoshi Saeki},
        title = {Refactoring Edit History of Source Code},
        booktitle = {Proceedings of the 28th IEEE International Conference on Software Maintenance},
        pages = {617--620},
        year = 2012,
        month = {sep},
    }
    [hayashi-icsm2012]: as a page
  19. Katsuhisa Maruyama, Eijiro Kitsu, Takayuki Omori, Shinpei Hayashi: "Slicing and Replaying Code Change History". In Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering (ASE 2012), Short paper session, pp. 246-249. Essen, Germany, sep, 2012.
    ID
    DOI: 10.1145/2351676.2351713
    Abstract
    Change-aware development environments have recently become feasible and reasonable. These environments can automatically record fine-grained code changes on a program and allow programmers to replay the recorded changes in chronological order. However, they do not always need to replay all the code changes to investigate how a particular entity of the program has been changed. Therefore, they often skip several code changes of no interest. This skipping action is an obstacle that makes many programmers hesitate in using existing replaying tools. This paper proposes a slicing mechanism that can extract only code changes necessary to construct a particular class member of a Java program from the whole history of past code changes. In this mechanism, fine-grained code changes are represented by edit operations recorded on source code of a program. The paper also presents a running tool that implements the proposed slicing and replays its resulting slices. With this tool, programmers can avoid replaying edit operations nonessential to the construction of class members they want to understand.
    BibTeX
    @inproceedings{maruyama-ase2012,
        author = {Katsuhisa Maruyama and Eijiro Kitsu and Takayuki Omori and Shinpei Hayashi},
        title = {Slicing and Replaying Code Change History},
        booktitle = {Proceedings of the 27th IEEE/ACM International Conference on Automated Software Engineering},
        pages = {246--249},
        year = 2012,
        month = {sep},
    }
    [maruyama-ase2012]: as a page
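    Sketch
    A minimal sketch, assuming a simplified data model: each fine-grained edit targets one class member and may depend on earlier edits, and a slice keeps only the edits (plus their dependencies) needed to rebuild one member, in chronological order. The Edit structure and the example history are invented for illustration and are not the tool's actual representation.
        # Toy history slicing over fine-grained edit operations.
        from dataclasses import dataclass, field

        @dataclass
        class Edit:
            op_id: int
            member: str                                      # class member touched
            depends_on: list = field(default_factory=list)   # ids of earlier edits

        history = [
            Edit(1, "Order#total"),
            Edit(2, "Order#items"),
            Edit(3, "Order#total", depends_on=[1, 2]),       # uses the new field
            Edit(4, "Notifier#send"),
            Edit(5, "Order#total", depends_on=[3]),
        ]

        def slice_history(history, member):
            """Edits needed to reconstruct `member`, kept in chronological order."""
            by_id = {e.op_id: e for e in history}
            needed = set()
            work = [e.op_id for e in history if e.member == member]
            while work:
                i = work.pop()
                if i not in needed:
                    needed.add(i)
                    work.extend(by_id[i].depends_on)
            return [e for e in history if e.op_id in needed]

        for e in slice_history(history, "Order#total"):
            print(e.op_id, e.member)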
  20. Haruhiko Kaiya, Shunsuke Morita, Kenji Kaijiri, Shinpei Hayashi, Motoshi Saeki: "Facilitating Business Improvement by Information Systems using Model Transformation and Metrics". In Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering (CAiSE 2012), pp. 106-113. Gdańsk, Poland, jun, 2012.
    URL
    http://ceur-ws.org/Vol-855/paper13.pdf
    Abstract
    We propose a method to explore how to improve business by introducing information systems. We use a meta-modeling technique to specify the business itself and its metrics. The metrics are defined based on the structural information of the business model, so that they can help us to identify whether the business is good or not with respect to several different aspects. We also use a model transformation technique to specify an idea of the business improvement. The metrics help us to predict whether the improvement idea makes the business better or not. We use strategic dependency (SD) models in i* to specify the business, and attributed graph grammar (AGG) for the model transformation.
    BibTeX
    @inproceedings{kaiya-caise2012,
        author = {Haruhiko Kaiya and Shunsuke Morita and Kenji Kaijiri and Shinpei Hayashi and Motoshi Saeki},
        title = {Facilitating Business Improvement by Information Systems using Model Transformation and Metrics},
        booktitle = {Proceedings of the CAiSE'12 Forum at the 24th International Conference on Advanced Information Systems Engineering},
        pages = {106--113},
        year = 2012,
        month = {jun},
    }
    [kaiya-caise2012]: as a page
  21. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Toward Structured Location of Features". In Proceedings of the 20th IEEE International Conference on Program Comprehension (ICPC 2012), Poster Session, pp. 255-256. Passau, Germany, jun, 2012.
    ID
    DOI: 10.1109/ICPC.2012.6240497
    Abstract
    This paper proposes structured location, a semiautomatic technique and its supporting tool both for locating features and exposing their structures in source code, using a combination of dynamic analysis, sequential pattern mining and formal concept analysis.
    Slide
    BibTeX
    @inproceedings{kazato-icpc2012,
        author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Toward Structured Location of Features},
        booktitle = {Proceedings of the 20th IEEE International Conference on Program Comprehension},
        pages = {255--256},
        year = 2012,
        month = {jun},
    }
    [kazato-icpc2012]: as a page
  22. Hiroshi Kazato, Shinpei Hayashi, Satoshi Okada, Shunsuke Miyata, Takashi Hoshino, Motoshi Saeki: "Feature Location for Multi-Layer System Based on Formal Concept Analysis". In Proceedings of the 16th European Conference on Software Maintenance and Reengineering (CSMR 2012), pp. 429-434. Szeged, Hungary, mar, 2012.
    ID
    DOI: 10.1109/CSMR.2012.54
    Abstract
    Locating features in software composed of multiple layers is a challenging problem because we have to find program elements distributed over layers that nevertheless work together to constitute a feature. This paper proposes a semi-automatic technique to extract correspondences between features and program elements across layers. By merging execution traces of each layer and feeding them into formal concept analysis, collaborative program elements are grouped into formal concepts and tied to a set of execution scenarios. We applied our technique to an example of a web application composed of three layers. The result indicates that our technique is not only feasible but also promising for promoting program understanding in a more realistic context.
    Slide
    BibTeX
    @inproceedings{kazato-csmr2012,
        author = {Hiroshi Kazato and Shinpei Hayashi and Satoshi Okada and Shunsuke Miyata and Takashi Hoshino and Motoshi Saeki},
        title = {Feature Location for Multi-Layer System Based on Formal Concept Analysis},
        booktitle = {Proceedings of the 16th European Conference on Software Maintenance and Reengineering},
        pages = {429--434},
        year = 2012,
        month = {mar},
    }
    [kazato-csmr2012]: as a page
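    Sketch
    A rough sketch of the formal concept analysis step only, under the assumption that each scenario's merged trace is simply a set of executed program elements. The context and the brute-force concept enumeration below are illustrative, not the paper's implementation.
        # Toy formal concept analysis over a "scenario x executed element" context.
        from itertools import chain, combinations

        context = {   # scenario -> program elements observed in its merged trace
            "login":    {"View.show", "Ctrl.auth", "DB.findUser"},
            "register": {"View.show", "Ctrl.create", "DB.insertUser"},
            "profile":  {"View.show", "Ctrl.auth", "DB.findUser", "DB.findPosts"},
        }

        all_attrs = set(chain.from_iterable(context.values()))

        def common_attrs(scenarios):
            # elements shared by all given scenarios (all elements for the empty set)
            return (set.intersection(*(context[s] for s in scenarios))
                    if scenarios else set(all_attrs))

        def sharing_scenarios(attrs):
            # scenarios whose traces contain all the given elements
            return frozenset(s for s, a in context.items() if attrs <= a)

        def concepts():
            found = {}
            for r in range(len(context) + 1):
                for scen in combinations(context, r):
                    intent = common_attrs(scen)
                    extent = sharing_scenarios(intent)   # closure of the scenario set
                    found[extent] = intent
            return found

        for extent, intent in concepts().items():
            print(sorted(extent), "->", sorted(intent))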
  23. Sirinut Thangthumachit, Shinpei Hayashi, Motoshi Saeki: "Understanding Source Code Differences by Separating Refactoring Effects". In Proceedings of the 18th Asia Pacific Software Engineering Conference (APSEC 2011), pp. 339-347. Ho Chi Minh city, Vietnam, dec, 2011.
    ID
    DOI: 10.1109/APSEC.2011.47
    Abstract
    Comparing and understanding differences between old and new versions of source code are necessary in various software development situations. However, if refactoring is applied between those versions, then the source code differences are more complicated, and understanding them becomes more difficult. Although many techniques for extracting refactoring effects from the differences have been studied, it is necessary to exclude the effects of the extracted refactorings and reconstruct the differences into meaningful and understandable ones with no refactoring effects. As described in this paper, we propose a novel technique to address this difficulty. Using our technique, we extract the refactoring effects and then apply them to the old version of the source code to produce the differences without refactoring effects. We also implemented a support tool that helps separate refactorings automatically. An evaluation on open source software showed that our tool is applicable to all target refactorings, so our technique is useful in real situations. The evaluation also demonstrated that the approach reduced the code differences by more than 21% on average, and that, within the same limited time, developers can understand more changes from the differences produced by our approach than from the original ones.
    Slide
    BibTeX
    @inproceedings{zui-apsec2011,
        author = {Sirinut Thangthumachit and Shinpei Hayashi and Motoshi Saeki},
        title = {Understanding Source Code Differences by Separating Refactoring Effects},
        booktitle = {Proceedings of the 18th Asia Pacific Software Engineering Conference},
        pages = {339--347},
        year = 2011,
        month = {dec},
    }
    [zui-apsec2011]: as a page
  24. Motohiro Akiyama, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Supporting Design Model Refactoring for Improving Class Responsibility Assignment". In Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems (MODELS 2011), Lecture Notes in Computer Science, vol. 6981, pp. 455-469. Wellington, New Zealand, oct, 2011.
    ID
    DOI: 10.1007/978-3-642-24485-8_33
    Abstract
    Although a responsibility-driven approach in object-oriented analysis and design methodologies is promising, the assignment of the identified responsibilities to classes (class responsibility assignment: CRA) is a crucial issue in achieving designs of higher quality. GRASP by Larman is a guideline for CRA and is being put into practice. However, since it is described informally in natural language, its successful usage greatly relies on designers' skills. This paper proposes a technique to represent GRASP formally and to automate appropriate CRA based on it. Our computerized tool automatically detects inappropriate CRA and suggests alternative appropriate CRAs to designers so that they can improve a CRA based on the suggested alternatives. We conducted preliminary experiments to show the usefulness of our tool.
    Slide
    BibTeX
    @inproceedings{akiyama-models2011,
        author = {Motohiro Akiyama and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Supporting Design Model Refactoring for Improving Class Responsibility Assignment},
        booktitle = {Proceedings of the ACM/IEEE 14th International Conference on Model Driven Engineering Languages and Systems},
        pages = {455--469},
        year = 2011,
        month = {oct},
    }
    [akiyama-models2011]: as a page
  25. Shinpei Hayashi, Takashi Yoshikawa, Motoshi Saeki: "Sentence-to-Code Traceability Recovery with Domain Ontologies". In Proceedings of the 17th Asia Pacific Software Engineering Conference (APSEC 2010), pp. 385-394. Sydney, Australia, nov, 2010.
    ID
    DOI: 10.1109/APSEC.2010.51
    Abstract
    We propose an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with sentences describing a feature, the relations between source code structures and problem domains are important. We model the knowledge of the problem domains as domain ontologies having concepts of the domains and their relations. Using semantic relations in the ontologies, in addition to method invocation relations and the similarity between identifiers in the code and words in the sentences, we locate the code fragments corresponding to the given sentences. Additionally, our prioritization mechanism, which orders the located code fragments based on the ontologies, enables users to select and analyze the results effectively. To show the effectiveness of our approach in terms of accuracy, a case study was carried out with our proof-of-concept tool and is summarized.
    Slide
    BibTeX
    @inproceedings{hayashi-apsec2010,
        author = {Shinpei Hayashi and Takashi Yoshikawa and Motoshi Saeki},
        title = {Sentence-to-Code Traceability Recovery with Domain Ontologies},
        booktitle = {Proceedings of the 17th Asia Pacific Software Engineering Conference},
        pages = {385--394},
        year = 2010,
        month = {nov},
    }
    [hayashi-apsec2010]: as a page
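    Sketch
    An invented, minimal sketch of the scoring idea only: expand the words of a feature sentence through a small domain ontology and rank methods by overlap with their identifier terms. The ontology, methods, and scoring function are assumptions for illustration; the paper's model additionally uses method invocation relations and a prioritization mechanism.
        # Toy sentence-to-method scoring with ontology-based query expansion.
        import re

        ontology = {                     # concept -> semantically related concepts
            "draw":   {"paint", "render"},
            "canvas": {"image", "layer"},
        }

        def expand(words):
            expanded = set(words)
            for w in words:
                expanded |= ontology.get(w, set())
            return expanded

        def identifier_terms(name):
            # split a camelCase identifier into lower-case words
            return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+", name)}

        def score(sentence, method_name):
            query = expand(set(sentence.lower().split()))
            terms = identifier_terms(method_name)
            return len(query & terms) / max(len(terms), 1)

        methods = ["paintCanvas", "renderLayer", "saveFile"]
        sentence = "draw a shape on the canvas"
        for m in sorted(methods, key=lambda m: -score(sentence, m)):
            print(m, round(score(sentence, m), 2))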
  26. Takanori Ugai, Shinpei Hayashi, Motoshi Saeki: "Visualizing Stakeholder Concerns with Anchored Map". In Proceedings of the 5th International Workshop on Requirements Engineering Visualization (REV 2010), co-located with RE 2010, pp. 20-24. Sydney, Australia, sep, 2010.
    ID
    DOI: 10.1109/REV.2010.5625662
    Abstract
    Software development is cooperative work by stakeholders. It is important for project managers and analysts to understand stakeholder concerns and to identify potential problems such as an imbalance of stakeholders or a lack of stakeholders. This paper presents a tool that visualizes the strength of stakeholders' interest in concerns on a two-dimensional screen. The proposed tool generates an anchored map from an attributed goal graph produced by AGORA, an extended version of goal-oriented analysis methods, in which stakeholders' interest in concerns and its degree are held as attributes of goals. The results from the case study are that (1) some concerns are not connected to any stakeholders and (2) stakeholders of the same type are interested in different concerns. These results suggest a lack of stakeholders for the unconnected concerns and the need for stakeholders of the same type to unify their requirements.
    Slide
    BibTeX
    @inproceedings{ugai-rev2010,
        author = {Takanori Ugai and Shinpei Hayashi and Motoshi Saeki},
        title = {Visualizing Stakeholder Concerns with Anchored Map},
        booktitle = {Proceedings of the 5th International Workshop on Requirements Engineering Visualization},
        pages = {20--24},
        year = 2010,
        month = {sep},
    }
    [ugai-rev2010]: as a page
  27. Shinpei Hayashi, Katsuyuki Sekine, Motoshi Saeki: "iFL: An Interactive Environment for Understanding Feature Implementations". In Proceedings of the 26th IEEE International Conference on Software Maintenance (ICSM 2010), ERA Track, pp. 1-5. Timisoara, Romania, sep, 2010.
    ID
    DOI: 10.1109/ICSM.2010.5609669
    Abstract
    We propose iFL, an interactive environment that is useful for effectively understanding feature implementation by application of feature location (FL). With iFL, the inputs for FL are improved incrementally by interactions between users and the FL system. By understanding a code fragment obtained using FL, users can find more appropriate queries from the identifiers in the fragment. Furthermore, the relevance feedback obtained by partially judging whether or not a fragment is relevant improves the evaluation score of FL. Users can then obtain more accurate results. Case studies with iFL show that our interactive approach is feasible and that it can reduce the understanding cost more effectively than the non-interactive approach.
    Slide
    BibTeX
    @inproceedings{hayashi-icsm2010,
        author = {Shinpei Hayashi and Katsuyuki Sekine and Motoshi Saeki},
        title = {{iFL}: An Interactive Environment for Understanding Feature Implementations},
        booktitle = {Proceedings of the 26th IEEE International Conference on Software Maintenance},
        pages = {1--5},
        year = 2010,
        month = {sep},
    }
    [hayashi-icsm2010]: as a page
  28. Shinpei Hayashi, Motoshi Saeki: "Recording Finer-Grained Software Evolution with IDE: An Annotation-Based Approach". In Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution (IWPSE-EVOL 2010), co-located with ASE 2010, pp. 8-12. Antwerp, Belgium, sep, 2010.
    ID
    DOI: 10.1145/1862372.1862378
    ISBN: 978-1-4503-0128-2
    Abstract
    This paper proposes a formalized technique for generating finer-grained source code deltas according to a developer's editing intentions. Using the technique, the developer classifies edit operations of source code by annotating the time series of the edit history with the switching information of their editing intentions. Based on the classification, the history is sorted and converted automatically to appropriate source code deltas to be committed separately to a version repository. This paper also presents algorithms for automating the generation process and a prototyping tool to implement them.
    Slide
    BibTeX
    @inproceedings{hayashi-iwpse-evol2010,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Recording Finer-Grained Software Evolution with {IDE}: An Annotation-Based Approach},
        booktitle = {Proceedings of the 4th International Joint ERCIM/IWPSE Symposium on Software Evolution},
        pages = {8--12},
        year = 2010,
        month = {sep},
    }
    [hayashi-iwpse-evol2010]: as a page
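    Sketch
    A minimal sketch, assuming a simplified history format: each recorded edit carries the intention label the developer had annotated at that moment, and edits are grouped by label into separate delta (commit) candidates while preserving chronological order. The tuple format and labels are invented for illustration and are not the paper's data model.
        # Toy grouping of an annotated edit history into separate deltas.
        from collections import OrderedDict

        # (timestamp, intention label, textual edit); the label records which
        # editing intention the developer had switched to at that moment.
        history = [
            (1, "fix-bug",  "guard against null in Order.total()"),
            (2, "refactor", "rename itm -> item"),
            (3, "fix-bug",  "add regression test for empty order"),
            (4, "refactor", "extract method computeTax()"),
        ]

        def to_deltas(history):
            deltas = OrderedDict()
            for _, intention, edit in sorted(history):     # chronological order
                deltas.setdefault(intention, []).append(edit)
            return deltas

        for intention, edits in to_deltas(history).items():
            print(f"commit candidate [{intention}]")
            for e in edits:
                print("   ", e)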
  29. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation". In Proceedings of the 10th International Conference on Quality Software (QSIC 2010), pp. 357-360. jul, 2010.
    ID
    DOI: 10.1109/QSIC.2010.19
    Abstract
    This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout the AGORA steps, including constructing goal graphs with group work, utilizing domain ontologies for goal graph construction, detecting various types of conflicts among goals, prioritizing goals, analyzing impacts when modifying a goal graph, and version control of goal graphs.
    BibTeX
    @inproceedings{saeki-qsic2010,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {An Integrated Support for Attributed Goal-Oriented Requirements Analysis Method and Its Implementation},
        booktitle = {Proceedings of the 10th International Conference on Quality Software},
        pages = {357--360},
        year = 2010,
        month = {jul},
    }
    [saeki-qsic2010]: as a page
  30. Motoshi Saeki, Shinpei Hayashi, Haruhiko Kaiya: "A Tool for Attributed Goal-Oriented Requirements Analysis". In Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering (ASE 2009), pp. 670-672. Auckland, New Zealand, nov, 2009.
    ID
    DOI: 10.1109/ASE.2009.34
    Abstract
    This paper presents an integrated supporting tool for Attributed Goal-Oriented Requirements Analysis (AGORA), which is an extended version of goal-oriented analysis. Our tool seamlessly assists requirements analysts and stakeholders in their activities throughout the AGORA steps, including constructing goal graphs with group work, prioritizing goals, and version control of goal graphs.
    BibTeX
    @inproceedings{saeki-ase2009,
        author = {Motoshi Saeki and Shinpei Hayashi and Haruhiko Kaiya},
        title = {A Tool for Attributed Goal-Oriented Requirements Analysis},
        booktitle = {Proceedings of the 24th IEEE/ACM International Conference on Automated Software Engineering},
        pages = {670--672},
        year = 2009,
        month = {nov},
    }
    [saeki-ase2009]: as a page
  31. Rodion Moiseev, Shinpei Hayashi, Motoshi Saeki: "Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages". In Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems (MODELS 2009), Lecture Notes in Computer Science, vol. 5795, pp. 650-664. Denver, Colorado, USA, oct, 2009.
    ID
    DOI: 10.1007/978-3-642-04425-0_52
    Abstract
    The Object Constraint Language (OCL) carries a platform-independent characteristic allowing it to be decoupled from implementation details, and therefore it is widely applied in model transformations used by model-driven development techniques. However, OCL can also be tremendously useful in the implementation phase, aiding assertion code generation and allowing system verification. Yet, taking full advantage of OCL without destroying its platform independence is a difficult task. This paper proposes an approach for generating assertion code from OCL constraints by using a model transformation technique to abstract language-specific details away from high-level OCL concepts, showing the wide applicability of model transformation techniques. We take advantage of structural similarities of implementation languages to describe a rewriting framework, which is used to easily and flexibly reformulate OCL constraints into any target language, making them executable on any platform. A tool is implemented to demonstrate the effectiveness of this approach.
    Slide
    BibTeX
    @inproceedings{rodion-models2009,
        author = {Rodion Moiseev and Shinpei Hayashi and Motoshi Saeki},
        title = {Generating Assertion Code from OCL: A Transformational Approach Based on Similarities of Implementation Languages},
        booktitle = {Proceedings of the ACM/IEEE 12th International Conference on Model Driven Engineering Languages and Systems},
        pages = {650--664},
        year = 2009,
        month = {oct},
    }
    [rodion-models2009]: as a page
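    Sketch
    A deliberately tiny, hypothetical rendering of one OCL invariant into assertion code via per-language string templates, just to convey the flavor of exploiting similarities between target languages. The templates and the operator mapping are assumptions; the paper describes a general rewriting framework, not this toy mapping.
        # Toy OCL-invariant-to-assertion translation using string templates.
        TEMPLATES = {
            "java":   'assert {expr} : "invariant {name} violated";',
            "python": 'assert {expr}, "invariant {name} violated"',
        }

        def ocl_to_assertion(name, ocl_expr, lang):
            # invented, minimal operator/receiver mapping for the example only
            expr = ocl_expr.replace("self.", "this." if lang == "java" else "self.")
            expr = expr.replace("<>", "!=")
            return TEMPLATES[lang].format(expr=expr, name=name)

        # context Order inv positiveTotal: self.total > 0
        print(ocl_to_assertion("positiveTotal", "self.total > 0", "java"))
        print(ocl_to_assertion("positiveTotal", "self.total > 0", "python"))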
  32. Hiroshi Kazato, Rafael Weiss, Shinpei Hayashi, Takashi Kobayashi, Motoshi Saeki: "Model-View-Controller Architecture Specific Model Transformation". In Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling (DSM 2009), co-located with OOPSLA 2009. Orlando, Florida, USA, oct, 2009.
    Abstract
    In this paper, we propose a model-driven development technique specific to the Model-View-Controller (MVC) architecture domain. Even though many application frameworks and source code generators are available for implementing this architecture, they depend on implementation-specific concepts, which take much effort to learn and use. To address this issue, we define a UML profile to capture architectural concepts directly in a model and provide a set of transformation mappings for each supported platform in order to bridge architectural and implementation concepts. By applying these model transformations together with source code generators, our MVC-based model can be mapped to various kinds of platforms. Since we restrict the domain to the MVC architecture only, automating the model transformation to source code is possible. We have prototyped a supporting tool and evaluated the feasibility of our approach through a case study, which demonstrates that model transformations specific to the MVC architecture can produce source code for two different platforms.
    BibTeX
    @inproceedings{kazato-dsm2009,
        author = {Hiroshi Kazato and Rafael Weiss and Shinpei Hayashi and Takashi Kobayashi and Motoshi Saeki},
        title = {Model-View-Controller Architecture Specific Model Transformation},
        booktitle = {Proceedings of the 9th OOPSLA Workshop on Domain-Specific Modeling},
        year = 2009,
        month = {oct},
    }
    [kazato-dsm2009]: as a page
  33. Takashi Yoshikawa, Shinpei Hayashi, Motoshi Saeki: "Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies". In Proceedings of the 25th International Conference on Software Maintenance (ICSM 2009), pp. 551-554. Edmonton, Canada, sep, 2009.
    ID
    DOI: 10.1109/ICSM.2009.5306390
    URL
    https://sites.google.com/site/ieeeicsm09/
    Abstract
    This paper proposes an ontology-based technique for recovering traceability links between a natural language sentence specifying features of a software product and the source code of the product. Some software products have been released without detailed documentation. To automatically detect code fragments associated with functional descriptions written in the form of simple sentences, the relationships between source code structures and problem domains are important. In our approach, we model the knowledge of the problem domains as domain ontologies. By using semantic relationships in the ontologies, in addition to method invocation relationships and the similarity between identifiers in the code and words in the sentences, we can detect code fragments corresponding to the sentences. A case study in the domain of painting software shows that we obtained results of higher quality than without ontologies.
    BibTeX
    @inproceedings{yoshikawa-icsm2009,
        author = {Takashi Yoshikawa and Shinpei Hayashi and Motoshi Saeki},
        title = {Recovering Traceability Links between a Simple Natural Language Sentence and Source Code Using Domain Ontologies},
        booktitle = {Proceedings of the 25th International Conference on Software Maintenance},
        pages = {551--554},
        year = 2009,
        month = {sep},
    }
    [yoshikawa-icsm2009]: as a page
  34. Kohei Uno, Shinpei Hayashi, Motoshi Saeki: "Constructing Feature Models using Goal-Oriented Analysis". In Proceedings of the 9th International Conference on Quality Software (QSIC 2009), pp. 412-417. aug, 2009.
    ID
    DOI: 10.1109/QSIC.2009.61
    Abstract
    This paper proposes a systematic approach to derive the feature models required in software product line development. In our approach, we use goal graphs constructed by goal-oriented requirements analysis. We merge multiple goal graphs into a single graph and then, regarding the leaves of the merged graph as candidate features, identify their commonality and variability based on the achievability of product goals. Through a case study in the portable music player domain, we obtained a feature model of high quality.
    BibTeX
    @inproceedings{uno-qsic2009,
        author = {Kohei Uno and Shinpei Hayashi and Motoshi Saeki},
        title = {Constructing Feature Models using Goal-Oriented Analysis},
        booktitle = {Proceedings of the 9th International Conference on Quality Software},
        pages = {412--417},
        year = 2009,
        month = {aug},
    }
    [uno-qsic2009]: as a page
  35. Shinpei Hayashi, Yasuyuki Tsuda, Motoshi Saeki: "Detecting Occurrences of Refactoring with Heuristic Search". In Proceedings of the 15th Asia-Pacific Software Engineering Conference (APSEC 2008), pp. 453-460. Beijing, China, dec, 2008.
    ID
    DOI: 10.1109/APSEC.2008.9
    ISSN: 1530-1362
    ISBN: 978-0-7695-3446-6
    Abstract
    This paper proposes a novel technique to detect the occurrences of refactoring from a version archive, in order to reduce the effort spent in understanding what modifications have been applied. In a real software development process, a refactoring operation may sometimes be performed together with other modifications at the same revision. This means that understanding the differences between two versions stored in the archive is not usually an easy process. In order to detect these impure refactorings, we model the detection as a graph search. Our technique considers a version of a program as a state and a refactoring as a transition, and then searches for a path from the initial state to the final state. To improve the efficiency of the search, we use the source code differences between the current and final states to choose the candidate refactorings to be applied next and to estimate the heuristic distance to the final state. We have demonstrated the feasibility of our approach through a case study.
    Slide
    BibTeX
    @inproceedings{hayashi-apsec2008,
        author = {Shinpei Hayashi and Yasuyuki Tsuda and Motoshi Saeki},
        title = {Detecting Occurrences of Refactoring with Heuristic Search},
        booktitle = {Proceedings of the 15th Asia-Pacific Software Engineering Conference},
        pages = {453--460},
        year = 2008,
        month = {dec},
    }
    [hayashi-apsec2008]: as a page
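    Sketch
    A stand-in example of the search formulation only: versions are states, candidate refactorings are transitions, and the remaining difference to the final version serves as the heuristic. Here a program is modeled merely as a set of method names and the only refactoring is a rename; both simplifications are assumptions for illustration, not the paper's program model.
        # Toy A*-style search for a refactoring sequence between two versions.
        import heapq
        from itertools import count

        old = frozenset({"getData", "printAll", "calcSum"})
        new = frozenset({"fetchData", "printAll", "computeSum"})

        def candidates(state):
            # candidate "rename" refactorings: map a method only in `state`
            # to a method only in the target version
            for src in state - new:
                for dst in new - state:
                    yield f"rename {src} -> {dst}", frozenset(state - {src} | {dst})

        def heuristic(state):
            return len(state.symmetric_difference(new))    # remaining difference

        def detect(start, goal):
            tie = count()                                  # heap tie-breaker
            frontier = [(heuristic(start), next(tie), 0, start, [])]
            seen = set()
            while frontier:
                _, _, cost, state, path = heapq.heappop(frontier)
                if state == goal:
                    return path
                if state in seen:
                    continue
                seen.add(state)
                for label, nxt in candidates(state):
                    heapq.heappush(frontier, (cost + 1 + heuristic(nxt),
                                              next(tie), cost + 1, nxt, path + [label]))
            return None

        print(detect(old, new))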
  36. Takeshi Obayashi, Shinpei Hayashi, Motoshi Saeki, Hiroyuki Ohta, Kengo Kinoshita: "Preparation and usage of gene coexpression data". In the 19th International Conference on Arabidopsis Research (ICAR 2008). Montreal, Canada, jun, 2008.
    Abstract
    Gene coexpression provides key information to understand living systems because coexpressed genes are often involved in the same or related biological pathways. Coexpression data are now used for a wide variety of experimental designs, such as gene targeting, regulatory investigations and/or identification of potential partners in protein-protein interactions. We constructed two databases for Arabidopsis (ATTED-II, http://www.atted.bio.titech.ac.jp) and mammals (COXPRESdb, http://coxpresdb.hgc.jp). Based on pairwise gene coexpression, coexpressed gene networks were prepared in these databases. To support gene coexpression, known protein-protein interactions, common metabolic pathways and conserved coexpression were also represented on the networks. We used Google Maps API to visualize large networks interactively. The relationships of the coexpression database with other large-scale data will be discussed, in addition to data construction procedures and typical usages of coexpression data.
    BibTeX
    @misc{obayashi-icar2008,
        author = {Takeshi Obayashi and Shinpei Hayashi and Motoshi Saeki and Hiroyuki Ohta and Kengo Kinoshita},
        title = {Preparation and usage of gene coexpression data},
        howpublished = {In the 19th International Conference on Arabidopsis Research},
        year = 2008,
        month = {jun},
    }
    [obayashi-icar2008]: as a page
  37. Shinpei Hayashi, Motoshi Saeki: "Extracting Prehistories of Software Refactorings from Version Archives". In Large-Scale Knowledge Resources. Construction and Application - Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources (LKR 2008), Lecture Notes in Artificial Intelligence, vol. 4938, pp. 82-89. Tokyo Institute of Technology (Ookayama Campus), Tokyo, Japan, mar, 2008.
    ID
    DOI: 10.1007/978-3-540-78159-2_9
    Abstract
    This paper proposes an automated technique to extract prehistories of software refactorings from existing software version archives, which is in turn a technique for discovering knowledge useful for finding refactoring opportunities. We focus on two types of knowledge to extract: characteristic modification histories and fluctuations in the values of complexity measures. First, we extract modified fragments of code by calculating the differences between the abstract syntax trees of the programs picked up from an existing software repository. We also extract past cases of refactorings, and then we create traces of program elements by associating modified fragments with cases of refactorings to find structures that occur frequently. The extracted traces help us identify how and where to refactor programs, which leads to improved program design.
    BibTeX
    @inproceedings{hayashi-lkr2008,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Extracting Prehistories of Software Refactorings from Version Archives},
        booktitle = {Large-Scale Knowledge Resources. Construction and Application -- Proceedings of the 3rd International Conference on Large-Scale Knowledge Resources},
        pages = {82--89},
        year = 2008,
        month = {mar},
    }
    [hayashi-lkr2008]: as a page
  38. Shinpei Hayashi, Motoshi Saeki: "Eclipse Plug-ins for Collecting and Analyzing Program Modifications". In Eclipse Technology eXchange Workshop (ETX 2006), co-located with OOPSLA 2006, Poster Session. Oregon Convention Center, Portland, Oregon, USA, oct, 2006.
    Abstract
    In this poster, we discuss the need for collecting and analyzing program modification histories, i.e., sequences of fine-grained program editing operations. We then introduce Eclipse plug-ins that can collect and analyze modification histories, and show a useful application technique that suggests suitable refactoring opportunities to developers by analyzing the histories.
    BibTeX
    @misc{hayashi-etx2006,
        author = {Shinpei Hayashi and Motoshi Saeki},
        title = {Eclipse Plug-ins for Collecting and Analyzing Program Modifications},
        howpublished = {In Eclipse Technology eXchange Workshop},
        year = 2006,
        month = {oct},
    }
    [hayashi-etx2006]: as a page

Services

Program Committee / Reviewer

  1. ICPC 2017
  2. SANER 2016, 2017
  3. WM2SP-16
  4. IWSR 2016
  5. GECCO 2015 SBSE-SS Track, 2016 SBSE Track
  6. NasBASE 2015
  7. IWESEP 2010, 2011, 2012, 2013, 2014
  8. AsianPLoP 2014, 2015, 2016
  9. ICSEA 2014, 2015, 2016
  10. APSEC 2012, 2012-ER, 2013
  11. IWSM/MENSURA 2011
  12. SES 2010, 2011, 2012, 2013, 2014, 2015, 2016
  13. FOSE 2012 in Yufuin, 2013 in Kaga, 2014 in Kirishima, 2015 in Tendo, 2016 in Kotohira
  14. IPSJ Journal, reviewer (2013/6/1-)
  15. IEICE Society Transactions Editorial Committee, reviewer (2010/8/27-)
  16. ICSM 2013 (External Reviewer)
  17. CAiSE 2013 (External Reviewer)
  18. RCIS 2013 (External Reviewer)
  19. PPDP 2009 (External Reviewer)

Steering/Organizing Committee

  1. ER 2016: Publicity Chair and Web Master
  2. SANER 2016: Student Volunteer Co-chair
  3. IWESEP 2016: Program Co-chair
  4. ACM ICPC Asia Regional Contest 2012 in Tokyo (2012)
  5. ASE 2006: Student Volunteer
  6. RE'04: Student Volunteer

Awards

  1. IEICE Software Science SIG Research Encouragement Award (×2), Jul. 2016.
  2. Contribution Award at FOSE 2013, Nov. 30, 2013.
  3. Yamashita SIG Research Award from IPSJ, Mar. 7, 2012.
  4. IEEE Computer Society Japan Chapter FOSE Young Researcher Award at FOSE 2011, Nov. 26, 2011.
  5. Best Paper Award from SES 2010, Aug. 31, 2010.
  6. Seiichi Tejima Doctoral Dissertation Award from Tokyo Institute of Technology, Feb. 24, 2010.
  7. Clark Awards 2003 from Hokkaido University, Mar. 24, 2004.
  8. The Best Score Award from Programming Contest 2003, IPSJ Hokkaido Branch, Mar. 22, 2003.
--
HAYASHI, Shinpei [[ e-mail address ]] [PGP pubkey(C5F14DA2)]